A Blockchain-Enabled Decentralized Energy Trading Mechanism for Islanded Networked Microgrids

Interconnected microgrids are becoming a building block of smart systems. Initiating secure and efficient energy trading mechanisms among networked microgrids for mutual reliability and economic benefits has become a crucial task. Recently, integrating blockchain technologies into the energy sector has gained a significant amount of interest, e.g. the transactive grid. This paper proposes a two-layer secured smart contract-based energy trading mechanism that allows microgrids to establish coalitions, adjust the electricity trading price, and achieve transparent, decentralized, and secure transactions without the intervention of a trusted third party. Since reliability benefits are the main drivers of microgrid operation in islanded mode, a new decentralized smart contract-based energy trading model for islanded networked microgrids is proposed in the first layer with the objective of achieving demand-generation balance. In the second layer, to achieve higher security, all executed contracts are verified and saved in a blockchain based on a newly developed two-phase consensus method that utilizes practical Byzantine Fault Tolerance (pBFT) and a modified Proof of Stake (PoS). Simulations are conducted in a Python environment to validate the proposed energy trading model.

I. INTRODUCTION

The power grid is evolving from a centralized grid, with electric power plants connected to the transmission system, to a decentralized grid, with distributed renewable generating units connected directly to distribution networks close to demand [1]. Microgrid applications have emerged significantly over the past few years and are anticipated to be deployed even more comprehensively in the near future. This deployment is driven by the increased interest in smart grid technology, where the future smart grid can be pictured as a system of interconnected smart microgrids [1]. IEEE Standard 1547.4 [2] has confirmed that representing large power grids as a group of interconnected microgrids significantly enhances the reliability, resiliency, and sustainability of the network. Thus, a great deal of attention has recently been paid to networked microgrid operation. Due to differing energy profiles, some microgrids may face energy deficits while others have surplus supplies. Therefore, when the utility grid tie is unavailable, it becomes imperative to establish coalition formation and energy trading negotiation mechanisms that ensure adequate power sharing among networked microgrids to balance local power generation and demand. This constitutes one of the main drivers of this work. Centralized energy trading models are hard to scale to a large number of entities, and the centralized scheme is susceptible to cyber-attacks [3]. In addition, the emergence of blockchain and the great attention given to it has generated tremendous interest in using it within the information infrastructure to assure secure and decentralized energy trading [3]. This fact is a second driver of this work. In the U.S., the Brooklyn Microgrid project is an example of a first successful peer-to-peer (P2P) blockchain system operating through smart meters, where prosumers are able to trade energy based on a pre-determined bid price [4].
A. LITERATURE REVIEW AND RESEARCH GAP

Integrating blockchain techniques into energy trading for networked microgrids is a relatively new area that has gained a lot of interest recently. In the literature, numerous models have been developed for P2P energy trading without integrating blockchain technology; the following paragraphs shed light on the most related work [5]-[13]. An energy trading model for a community-based microgrid in the presence of the utility grid was proposed in [5], in which a market operator determines the spot price by intersecting the demand with the ascending plot of the submitted bids, where all offered bids are compared with the utility price. The work in [6] proposed a modified auction-based mechanism for a smart community that relies on the interactions between shared facility controllers (SFCs) and residential units using a central auctioneer, where the auction price is determined using a Stackelberg game. A Stackelberg game modeling the interaction between producers and consumers is also adopted in [7] as a noncooperative game for developing a P2P energy trading model in virtual microgrids. Similarly, the model proposed in [8] is formulated as a Stackelberg game in which a central microgrid operator is the leader, setting internal buying and selling prices, while PV prosumers are the followers, adjusting their energy sharing profiles in response to the internal prices. The trading model in [9] is designed as an auction game to trade energy between energy demanders and a producer within a grid-connected, community-based DC microgrid. In this case, demanders submit bids to compete for DC power packets, and a controller decides the energy allocation and power packet scheduling. In [10], two auction schemes are developed for a smart multi-energy system considering the day-ahead and real-time markets. A multi-agent system manager sells electricity, gas, and heat to users and also trades energy with external systems. Further, a multi-agent, game theory-based reversed auction model is introduced in [11] to trade energy between local resources of a grid-connected microgrid system. The authors in [12] developed a two-stage bidding strategy for P2P trading in a nanogrid. In the first stage, a two-step price predictor aiming to promote the utilization of local renewable energy is developed for transaction adjustment, whereas a game-theoretic technique is developed in stage two to increase social welfare. In [13], a distributed iterative method for P2P trading between PV prosumers within a microgrid system is developed, where an energy-sharing model with price-based demand response is proposed. A dynamic internal pricing model is formulated based on the supply and demand ratio (SDR) of exchanged PV energy. The pricing and negotiation mechanism of a P2P energy trading model is mainly shaped by the objective of the energy trading, which must therefore be clearly defined [14]. P2P trading can be utilized in both grid-connected and islanded microgrid networks. In grid-connected mode, the objective of each microgrid is to minimize its operation cost [15]. On the contrary, in islanded interconnected microgrids, where the grid backup is absent, the objective of each microgrid is to achieve a reliable power supply by balancing power demand and generation [15].
The main objective of each peer in most models in the literature is to maximize economic benefits in grid-connected mode, where the connection with the power grid guarantees demand-generation balance. Hence, the main incentive for peers to participate in the trading process is maximizing their economic benefits, and each microgrid will only interact with other microgrids if such interactions lead to additional economic gains. There is a need to develop innovative decentralized P2P energy trading mechanisms for islanded networked microgrid systems with appropriate pricing schemes that can incentivize the participation of all microgrids with the objective of maintaining local demand-generation balance. In addition, energy transactions must be securely executed without the intervention of a trusted third party. Few trading models for islanded microgrids are available in the literature. For instance, the work in [16] proposes a double-sided auction implemented by an aggregator considering an approximate price anticipation process. In [17], a nonconvex optimization problem derived from a Stackelberg game-theoretic approach and backward induction is solved by developing a decentralized bilevel iterative algorithm. However, trading models for islanded microgrids in the literature assume that islanded microgrids have sufficient local power sources, that is, that demand-generation balance can always be assured locally. Besides, P2P-based auction models are designed assuming the presence of an aggregator or auctioneer that facilitates the trading and pricing mechanisms. In terms of integrating blockchain technology, a few studies have focused on developing decentralized energy trading using blockchain [18]-[22]. For instance, the work in [18] proposes a two-layer algorithm for blockchain-based energy trading negotiation and transaction settlement among grid-connected networked prosumers. In [19], a smart contract method based on energy tokens is proposed, where an energy token represents a unit of power at a fixed price. The authors in [20] extend the energy token method by using a linear, time-based value depreciation model for the tokens. This method stimulates energy trading by incentivizing the buying and selling of tokens within a time limit. An incentive method utilizing Nash bargaining theory is presented in [21]. In [22], the impact of applying load management on reducing the cost of energy bought from a blockchain-based P2P energy trading market is studied; Proof of Work (PoW) is used as the blockchain consensus method in that work. System security and privacy are crucial to the successful operation of interconnected energy trading systems, and recently proposed models have turned to blockchain technology to address these concerns [23]-[28]. A blockchain model for detecting data corruption produced by third-party intrusion is proposed in [23]. A modified blockchain approach including a data restoration technique for the event of corruption is presented in [24]. A unified energy blockchain based on a consortium blockchain for secure P2P energy trading in the industrial internet of things is presented in [25]. To increase system security and privacy, a differentially private energy trading auction using a consortium blockchain for microgrid systems is proposed in [26]. The use of practical Byzantine Fault Tolerance (pBFT) as an alternative to inefficient consensus algorithms is proposed in [27].
Energy trading based on a blockchain implementation using Hyperledger Fabric, considering different energy transaction scenarios and crowdsources, is presented in [28]. A secure and efficient Vehicle-to-Grid (V2G) energy trading model combining blockchain, edge computing, and contract theory is proposed in [29]. In this model, Stackelberg game theory is used to determine the optimal pricing strategy of the edge computing service. Similarly, a model for Internet of Electric Vehicles (IoEV)-based demand response (DR) using a consortium blockchain, contract-theoretic modeling, and computational intelligence is proposed in [30]. In that work, all transactions are created, propagated, and verified by authorized local energy aggregators (LEAGs) at moderate cost, and the PoW consensus protocol is adopted for verifying all created blocks. In the context of using the pBFT method, a Tendermint consensus algorithm based on pBFT and Proof of Stake (PoS) is utilized for developing an Ethereum smart-contract blockchain node named "Hyperledger Burrow" [31]. Additional hybrid consensus methods have been proposed in the literature, including a two-phase consensus algorithm that combines pBFT with PoW [32], and pBFT with Proof of Authority (PoA) [33]. Based on the aforementioned review, the following two aspects must be tackled to develop innovative blockchain-enabled P2P energy trading models: (i) How should the energy trading price and amount be determined, especially for isolated networked microgrids when the utility tie and its retail price are unavailable? (ii) Once a trading transaction is completed, more secure, energy- and time-efficient consensus algorithms are needed to settle those transactions in the blockchain. Besides, the existence of malicious nodes that might invalidate the voting process of the consensus mechanism and manipulate the recorded data needs to be considered in the algorithm's development; otherwise, the blockchain system may become insecure, unreliable, and inefficient.

B. ORIGINAL CONTRIBUTION

To tackle the above two challenges, this paper proposes a two-layer blockchain-based energy trading algorithm for a group of isolated interconnected microgrids. The two-layer algorithm develops a smart contract-based energy trading mechanism in layer one, and a transaction settlement method in the second layer. The proposed blockchain-based contract settlement protocol utilizes a two-phase consensus algorithm consisting of pBFT and a modified PoS to ensure system security and energy and time efficiency. The contributions of this work can be summarized as follows. 1) Significantly distinguished from the work in [2]-[21], which focuses on maximizing economic benefits assuming that utility grid backup and sufficient local resources are available to cover the generation-demand mismatch, the developed price adjustment mechanism promotes the balance of supply and demand between the islanded networked MGs, as well as assuring price fairness through a multi-agent negotiation mechanism. It is unique to isolated networked MGs where the utility backup is unavailable and achieving reliable operation is the main social welfare objective for all MGs. 2) A smart contract-based energy trading mechanism is developed to allow microgrids to establish coalitions and negotiate the electricity trading price and amount.
Instead of completing a transaction through an auction mechanism, as widely proposed in the literature, our method is uniquely developed such that pre-determined smart contracts are executed autonomously in a local energy marketplace, where peers (seller and buyer) do not share their data, including energy prices, during the trading process. This contributes to the need for more privacy-preserving negotiation mechanisms for P2P trading models. 3) Distinguished from the work in [14]-[33], this work proposes a new blockchain-based contract settlement protocol utilizing a two-phase consensus algorithm consisting of pBFT and a modified PoS to ensure system security and energy and time efficiency. 4) Unlike the two-phase consensus algorithm proposed in [31], which utilizes a PoS selection process based on a predictable weighted round-robin fashion, where a malicious attacker can accurately target a future validator based on the known parameters, our modified PoS method uses a node weighting factor that only increases a node's chance of selection. The selection of the validator is still conducted using a pseudo-random number generator, making it impossible to accurately predict a future validator based on known information. Furthermore, Hyperledger Burrow [31] relies on a PoS method in which the stake of a node is some resource of value that must be wagered to receive consideration for selection as a validator. In our method, the stake of a node is determined by its level of participation in past contracts. For example, a node that participated in three contracts (as buyer or seller) in the previous time interval t would have a stake of 3. In this way, a node does not have to offer a resource or asset during consensus to be considered as a candidate validator. The remainder of this paper is organized as follows: Section II provides an overview of blockchain and smart contract technologies. Section III introduces the energy trading model. In Section IV, the numerical simulation results are presented and discussed. The effectiveness of the model is further demonstrated through a comparative case in Section V. The paper is concluded in Section VI.

II. BLOCKCHAIN AND SMART CONTRACT: AN OVERVIEW

The blockchain concept was proposed for the first time in 2008 [34] and can be defined as a chain constructed from many blocks that contain information [35]. All information is updated synchronously across the entire network so that each peer (node in the network) keeps a record of the same ledger. Using its consensus algorithm, the integrity of the information recorded in the ledger can be assured without the intervention of a third authority [36]. Various consensus algorithms have been developed, such as PoW, PoS, Delegated Proof of Stake (DPoS), the Ripple Protocol Consensus Algorithm (RPCA), and AlgoRand [36]. The consensus algorithm is the most important factor of the entire blockchain system, because its efficiency directly determines the blockchain's performance. The main attribute of this technology is that it tracks all created chained blocks so that no block can be removed or manipulated. This makes blockchain a very secure and trusted decentralized technique for transferring money and contracts without relying on a trusted third party [35].
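As a concrete illustration of this chaining property, the following minimal Python sketch (the field names and payloads are our own, not taken from the paper) shows how each block commits to its parent's hash and why tampering with any chained block is detectable:

```python
import hashlib
import json

def block_hash(block):
    # Serialize the block deterministically and hash it with SHA-256,
    # the same algorithm the paper adopts later in Section III-D.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# A two-block chain: each block stores the hash of its parent.
genesis = {"data": "contract #1", "prev_hash": "0" * 64}
child = {"data": "contract #2", "prev_hash": block_hash(genesis)}

# Tampering with the first block changes its hash, so the link stored
# in the child no longer matches and the chain is detectably broken.
genesis["data"] = "contract #1 (tampered)"
assert child["prev_hash"] != block_hash(genesis)
```

Because every node keeps its own copy of the ledger, such a mismatch is immediately visible to honest peers.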
The blockchain is a popular choice for secure transactions in decentralized networks due to the immutability of the blockchain record and the consensus methods used to validate information appended to the chain [37]. The data contained in a block includes the identities of the parties participating in the transaction, the amount of goods being transacted, the timestamp of the transaction's execution, and an alphanumeric string called a hash. Aside from the data contained in the block, a unique hash is generated and appended to the block, identifying the block and its contents [35]. Each block in the chain includes its own hash, and the blocks are therefore chained together by these uniquely generated hashes. If any of the data in a chained block is modified, the hash associated with that block will change, no longer matching the hash stored in the next block and thereby breaking the chain. A smart contract is defined as a computerized transaction protocol that executes the terms of a contract [14]. Contractual conditions are converted into code and embedded into the property, enabling the self-execution of trusted transactions and agreements between different, anonymous nodes without the need for a central authority. For blockchain applications, smart contracts are scripts stored on the blockchain with a unique hash [14]. A smart contract is triggered by addressing a transaction to it. It then executes independently and automatically in a prescribed manner on every node in the network, according to the data included in the triggering transaction.

III. THE PROPOSED ENERGY TRADING MODEL

The flowchart of the overall trading model is shown in Fig. 1 and described in detail in the following sections.

A. DESCRIPTION OF SYSTEM MODEL

The entire interconnected islanded microgrid system is modeled as a distributed multi-agent network, where each agent (MG) is a node of the network. A multi-agent coalition refers to a way for agents to cooperate to complete a task that none of them can complete independently [18]. Based on this definition, it is assumed that each MG (agent) consists only of renewable distributed generation and power demand. Since all MGs are connected to each other and disconnected from the power grid, grid backup is unavailable. Therefore, the task of every MG operator in the islanded system is to balance local renewable generation and demand. Hence, achieving zero net load is used to measure the level of satisfaction of all participants in the P2P trading, where the net load is defined as demand minus renewable generation. All MGs in the islanded system share a common interest, namely satisfying their net load; hence, they agree to work collaboratively to satisfy it. It is also assumed that each MG does not have sufficient non-renewable local resources (e.g., dispatchable units, storage, controllable loads). Therefore, each MG is strongly incentivized to participate in the P2P trading to balance its net load. This incentive mechanism is justified by the fact that reliability benefits are the main drivers of microgrid operation in islanded mode [15]. The limited capacity of local resources can be used only as a backup if the power exchanged in the P2P trading is insufficient to balance the net load. Therefore, it is not necessary to formulate a scheduling optimization problem, since dispatchable units, storage, and controllable loads are not primarily used to balance the net load.
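To make the zero-net-load criterion concrete, the sketch below computes the hourly net load of a microgrid from simulated demand and generation profiles and labels each interval with the role the microgrid will take in the trading formalized in Section III-B. The bounds and seed are illustrative placeholders, not the paper's equations (1)-(2):

```python
import random  # CPython's random module is itself a Mersenne Twister generator

T = 24                          # day-ahead horizon, hourly intervals
GEN_MAX = LOAD_MAX = 10.0       # kW; assumed bounds for illustration only

rng = random.Random(42)
p_gen = [rng.uniform(0.0, GEN_MAX) for _ in range(T)]
p_load = [rng.uniform(0.0, LOAD_MAX) for _ in range(T)]

roles = []
for t in range(T):
    p_net = p_load[t] - p_gen[t]   # net load: demand minus renewable generation
    if p_net > 0:
        roles.append("buyer")      # energy deficit
    elif p_net < 0:
        roles.append("seller")     # energy surplus
    else:
        roles.append("balanced")   # zero net load: generation-demand balance
```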
In terms of the model architecture, each node represents a microgrid consisting of renewable generators, power demand, a local controller, and a trade controller in a layered architecture. The renewable distributed generation and power demand are located in the physical resource layer. On top of that is the local controller (LC), which is mandated to manage the load and renewable generation data (forecasting the hourly net load). In the event of an energy deficit or surplus, the local controller forwards the information to the trade controller (TC), which is tasked with buying or selling energy to satisfy the microgrid's hourly net load over a day-ahead 24-h time horizon. A graphical illustration of the system model is given in Fig. 2.

B. SELLERS AND BUYERS IDENTIFICATION

We assume that all microgrids have the capability to forecast their power demand and generation for a particular time slot t. Forecasting the microgrid demand and renewable generation and modeling their uncertainty are not the focus of this work. Hence, the energy production and consumption for each time interval t in the time horizon T are generated randomly using the Mersenne Twister pseudo-random number generator [38]. The Mersenne Twister outputs a statistically uniform distribution between the upper and lower bounds detailed in equations (1) and (2), obtained from [39] with a slight modification. The use of the Mersenne Twister is based on the fact that, for each time horizon T, there is a maximum value for the renewable generation and electric load below which the sub-horizon values are permitted to vary in a quasi-random fashion dictated by the generator [40]. The local controller determines the renewable generation-based net load of each microgrid by:

P_net(t) = P_load(t) − P_gen(t)   (3)

where P_net(t) < 0 denotes an energy surplus, while P_net(t) > 0 indicates an energy deficit. If P_net = 0, the microgrid has reached generation-demand balance. For all time intervals where P_net ≠ 0, the local controller notifies the trade controller of the need to buy or sell energy. A microgrid with a negative net load is identified as a seller, whereas a microgrid with a positive net load is identified as a buyer, as shown in equations (4) and (5).

C. PRICE ADJUSTMENT AND CONTRACT MATCHING MECHANISM

The local controller forwards the energy deficit or surplus to the trade controller. The trade controllers of the microgrids interface with each other in a local energy trading marketplace. The marketplace concept for contract matching is a commonly used platform for peers willing to participate in trading, including in major stock markets such as the NYSE [41]. For each round r in the time interval t (hourly interval) of the 24-h day-ahead scenario, pre-determined energy selling smart contracts are offered at fixed prices by microgrids with surplus power, and energy buying contracts are offered by microgrids with power deficits. Both sellers and buyers aim to get their contracts matched and executed to satisfy their net load, since grid backup is absent. The contract matching process is developed as follows. 1) Sellers start with high energy prices and make progressively lower offers to potential buyers after each unsuccessful offering round. Conversely, buyers start with low prices and make progressively higher offers to potential sellers.
2) For each round r in the time interval t, an autonomous contract matching round is conducted in the marketplace considering the following possible scenarios: (i) If P_sell ≤ P_buy, the contract is automatically executed. (ii) If P_sell > P_buy, the buyer moves on to the next available contract. (iii) If the offered contract is not matched after the first trading round, the seller must lower its contract selling price using (6) and the buyer must increase the desired purchase price using (7). The contract will be automatically executed when the pre-conditions of the selling and buying contracts match in a subsequent matching round. The contract price adjustment mechanism is developed as follows (a sketch of this round-by-round adjustment loop is given after the simulation parameters in Section IV). 1) For time intervals when P_net < 0, the microgrid is designated as a seller (P_net is identified as P_net^sell) and the trade controller authors a smart energy contract containing the amount of surplus power for sale and the price per kW of the power being sold. The seller calculates the desired selling price for each contract offering round as shown in (6):

Pr_sell(r) = FP_sell − τ [C_bss P_batt + α C_cur (P_net^sell − P_batt) + A_{i,j} C_tr P_net^sell]   (6)

Since the grid tie is unavailable in the islanded system, the seller initially attempts to sell at a price higher than the utility price, where FP_sell is a fixed desired initial selling price given by FP_sell > P_utility. To ensure price fairness and avoid price adjustment manipulation by the seller, a maximum threshold for FP_sell is specified by the marketplace and agreed on by all MG operators (in this study, it is set not to exceed 1.5 times the utility price). If the offered contract is not matched in the first round, the seller must lower its selling price. The price is reduced considering the operation cost of battery storage, the curtailment cost, and the transmission cost: the second term in (6) represents the battery operation cost for each charging cycle, and the third term represents the energy curtailment cost. The cost of curtailment is modeled as a loss of revenue, where C_cur = FP_sell in the first round and Pr_sell(r − 1) for all subsequent rounds. It should be noted that curtailment is applied only when the surplus power (P_net^sell) is higher than the battery charging limit for each round (|P_net^sell| > L_bss). Hence, the curtailed amount of power in each round is a percentage of the difference between the surplus power and the power charged into the battery (α(P_net^sell − P_batt)). Note that τ is a binary value (0 for the first round, 1 for all subsequent rounds). The fourth term indicates the transmission cost, where A is the distance matrix representing the distance between any two microgrids in the network, i is the buyer MG index, and j is the seller MG index; hence A_{i,j} is the distance between MG i and MG j. The price adjustment process in (6) builds on the fact that sellers would have to charge and curtail surplus power to satisfy their net load if they did not sell their excess power. This is a reasonable adjustment, since the microgrid would have to pay these costs if the trading scheme were unavailable. The seller goes back and adjusts its selling price after each round until its contract is matched and executed. 2) For time intervals where P_net > 0, the microgrid is designated as a buyer (P_net is identified as P_net^buy) and the trade controller enters the marketplace to evaluate potential contract purchases.
The buyer enters the marketplace with a desired purchase price calculated using (7):

Pr_buy(r) = FP_buy + τ [C_D P_D + β C_sh (P_net^buy − P_D)]   (7)

Note that FP_buy is a fixed buying price given as a percentage of the utility grid retail price, where FP_buy < P_utility (the buyer intends to pay less). To avoid manipulation of the price adjustment by the buyer, a minimum threshold for FP_buy is specified by the marketplace and agreed on by all MG operators (50% of the utility price is adopted in this study). In addition, all cost parameters (e.g., C_D, C_sh, C_cur, C_tr) used in the price adjustment equations are constant and determined by the local marketplace in which all peers trade. If the offered buying contract is not matched, the buyer increases its offered buying price. The price is increased considering the operation cost of dispatchable units and the load shedding cost: the second term in (7) denotes the dispatchable unit operation cost for a committed cycle, and the third term indicates the load shedding cost. Note that load shedding is applied only when the deficit power (P_net^buy) is larger than the backup dispatchable units' output power limit for each round (|P_net^buy| > L_res). Hence, the amount of deficit power to be shed is a percentage of the difference between the deficit power and the power supplied by the dispatchable unit (β(P_net^buy − P_D)). The price adjustment process in (7) builds on the fact that buyers would have to draw power from backup dispatchable units, as well as apply load shedding, to balance their deficit net load if they did not buy power. The buyer goes back and adjusts (increases) its buying price after each round until its contract is matched and executed. The complete contract price adjustment and execution algorithm is shown in Table 1. Sellers and buyers prepare their contract conditions off-chain, and then compile and deploy their smart contracts for possible execution to the local marketplace using an appropriate blockchain architecture that supports smart contracts and deterministic consensus protocols. Hyperledger is private blockchain software [42] that provides a modular architecture making it simple to implement smart contracts and deterministic pBFT-based distributed consensus [43]. For instance, the energy trading model proposed in [44] adopted the Hyperledger platform for its implementation. To avoid trade manipulation, prospective energy buyers do not share their desired purchase prices with energy sellers. Sellers that are aware of desired buying prices can manipulate trade by (i) overvaluing their contracts, holding to a higher price knowing that prospective buyers will raise their desired buying price to meet energy demands, or (ii) undervaluing their contracts in order to undercut the competition and execute more contracts. The converse is true for buyers manipulating buying prices. Therefore, prices should not be shared between buyers and sellers during the contract matching process. This is done by encrypting the data included in the contract before it is broadcast in the marketplace. For instance, the confidential transactions technique discussed in [45] can be adopted, where the buyer and seller have contracts that contain the price and other confidential information. The purpose of confidential transactions is to keep the price amount secret while granting verifiers the ability to check the validity of amounts [46].
In this case, the buyer and seller perform a two-stage encryption process. First, the buyer and seller perform a cryptographic hash operation on their contracts to preserve the confidentiality and authenticity of the data with respect to each other. Then, public-key (asymmetric) cryptography is added to further protect the data from third-party (intruder) or malicious intervention. In particular, the buyer and seller each generate a public key and a private key. The buyer then encrypts its (cryptographically hashed) contract with the public key of the seller. The seller then decrypts the data using its own private key. Once the seller decrypts the other party's data, its smart contract system performs price matching. Note that this price-matching operation contains a smart algorithm that can work on the cryptographically hashed (price) amounts from the buyer and seller and make a decision [45]. Once the contract matching operation is done, it informs the buyers and sellers of its decision. In this way, the buyer and seller are not exposed to each other's price, and the overall confidentiality is preserved.

D. TWO-PHASE BLOCKCHAIN CONSENSUS PROTOCOL

To enable trusted settlement of electricity trading transactions, a smart blockchain-based contract settlement protocol is developed. The proposed blockchain method uses a traditional distributed ledger consisting of blocks of data connected in a single chain. These blocks contain the details of the finalized contract from the trading marketplace, including the network addresses of the buyer and seller, the amount of energy being traded, the price per kilowatt of the contract, the timestamp when the contract was executed, the hash of the previous block, and a new hash generated using the SHA-256 hashing algorithm. Because this ledger is distributed, each node of the network maintains a copy of it. Before a block is appended to the ledger chain, it must be validated using a consensus method. A two-phase consensus method is proposed. In the first phase, pBFT is adopted. pBFT has been proposed in recent years as a viable alternative to popular consensus methods such as PoW and PoS. Byzantine Fault Tolerance (BFT) refers to the ability of a distributed network to reach an assured consensus despite the presence of faulty or malicious nodes that propagate false data. The consensus process developed in this work is shown in Fig. 3. pBFT is an optimized application of the traditional BFT method, which ensures consensus for any network of size 3f + 1 when there exist 2f + 1 validating responses (where f denotes the maximum number of faulty nodes). pBFT works by a voting consensus in which each node has an equally weighted vote. For each block validation process, the following steps are implemented. 1) Initiate: a random node (microgrid) is selected as the primary node. The primary node broadcasts the proposed block, including the contract data, to each of the secondary nodes in the network. 2) Acknowledge: each of the secondary nodes broadcasts a vote to acknowledge its receipt of the proposed block to every node. 3) Validate: after receiving 2f + 1 approval messages, a node broadcasts a validation message if the data in the proposed block is valid. 4) Finalize: when 2f + 1 validation messages are received, the block has been validated and moves to the second phase of the consensus process.
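The arithmetic behind these voting thresholds can be sketched in a few lines of Python; this illustrates only the 2f + 1 quorum rule, not the full message protocol:

```python
def max_faulty(n_nodes):
    # pBFT tolerates f faulty nodes in a network of size 3f + 1.
    return (n_nodes - 1) // 3

def quorum(n_nodes):
    # Each voting phase (acknowledge, validate) needs 2f + 1 matching messages.
    return 2 * max_faulty(n_nodes) + 1

def phase_passed(approvals, n_nodes):
    # approvals: number of approval/validation messages collected so far.
    return approvals >= quorum(n_nodes)

# Example: a 10-microgrid network tolerates f = 3 faulty nodes,
# so 7 matching messages are required to pass each phase.
assert max_faulty(10) == 3 and quorum(10) == 7
assert phase_passed(7, n_nodes=10) and not phase_passed(6, n_nodes=10)
```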
It should be noted that pBFT is secure for any network of size 3f + 1, where f is the maximum number of faulty nodes. Therefore, in a system where more than 1/3 of the nodes are faulty (corrupted or non-functioning), pBFT no longer ensures a secure consensus. In our model, the voting criterion is selected as 2f + 1 because this is greater than 2/3 of the network size. While a 2/3 criterion is sufficiently safe, the margin of error is a motivating factor for introducing the modified PoS as a second phase of consensus. To ensure a high level of security, a simultaneous second phase of consensus is conducted using a modified version of PoS. For this consensus method, each microgrid is assigned a semi-random value generated using a weighting factor. This weighting factor corresponds to the recent history of participation in the energy trading marketplace, where microgrids with higher levels of participation are assigned higher weighting factors. The microgrid with the highest stake value during each consensus round is chosen as the validator node. The validator node constructs a block using the same contract data as was broadcast in the pBFT phase and compares its block to the one validated by the pBFT consensus. If the two blocks match, the validator broadcasts a final confirmation that the block is valid, and it is appended to the public chain. If the two blocks do not match, this indicates cyber-attack activity that has manipulated the formed block, and an alarm is activated to report a data manipulation incident. The overall two-phase consensus algorithm is illustrated in Table 2, where each microgrid is described as a prosumer. In comparison with other common blockchain consensus methods, pBFT shows several advantages. First, pBFT has no fixed time requirement before consensus can be reached; PoW and traditional PoS both impose fixed time intervals before a proposed block can be validated. Additionally, pBFT does not require resources specific to the blockchain protocol, as it only uses the existing network topology to perform digital communications. PoW requires expensive computing equipment to perform tasks that consume significant amounts of energy. Furthermore, traditional PoS requires nodes to have expendable financial resources in order to wager for validation rights. It is worth mentioning that pBFT-based consensus has a scalability issue when used with a massive number of nodes; however, partitioning the network into smaller groups called federates improves scaling up to 1,000 nodes [43].

IV. SIMULATION RESULTS AND DISCUSSION

The proposed model was simulated using Python 3.6 in Microsoft Visual Studio Professional 2017 on a quad-core 2 GHz CPU equipped with 16 GB of RAM. For adjusting the contract price, the charging cost of the battery storage (C_bss) is taken as 0.03 $/kW [47], with a charging limit of 2 kW per round. The adopted operation cost of the dispatchable unit (C_D) is 0.25 $/kWh [48], with a slight modification considering a ramp rate of 5 kW per round. The load shedding cost (C_sh) adopted in this study is 1.0 $/kW [49]. The maximum curtailment ratio per round (α) is taken as 1% of the hourly surplus net load value, and the load shedding ratio per round (β) as 4% of the hourly deficit net load value. The transmission cost (C_tr) is taken as 2.8 × 10⁻⁶ $/(kWh·km) [50].
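Before turning to the results, the round-by-round loop referenced in Section III-C can be sketched as follows. The sketch applies (6) and (7) recursively to the previous round's price, which is one reading consistent with the progressive offers shown in Fig. 4; the input values below are arbitrary illustrations and are not intended to reproduce the paper's numbers:

```python
# Marketplace cost parameters from this section.
C_BSS, C_D, C_SH, C_TR = 0.03, 0.25, 1.0, 2.8e-6
ALPHA, BETA = 0.01, 0.04

def lower_sell_price(prev, p_net_sell, p_batt, dist_ij):
    # One application of (6): discount by the avoided battery, curtailment
    # (C_cur taken as the previous round's price) and transmission costs.
    return prev - (C_BSS * p_batt
                   + ALPHA * prev * (p_net_sell - p_batt)
                   + dist_ij * C_TR * p_net_sell)

def raise_buy_price(prev, p_net_buy, p_d):
    # One application of (7): add the avoided dispatchable-unit operation
    # and load-shedding costs.
    return prev + (C_D * p_d + BETA * C_SH * (p_net_buy - p_d))

# Matching loop: adjust until the selling price falls to or below the
# buying price (scenario (i)); the contract then executes.
pr_sell, pr_buy, rounds = 0.25, 0.15, 1   # initial FP_sell, FP_buy ($/kW)
while pr_sell > pr_buy:
    pr_sell = lower_sell_price(pr_sell, p_net_sell=1.69, p_batt=1.69, dist_ij=10.0)
    pr_buy = raise_buy_price(pr_buy, p_net_buy=1.69, p_d=1.69)
    rounds += 1
print(f"matched after {rounds} rounds at {pr_sell:.4f} $/kW")
```

The loop terminates because the seller's price strictly decreases and the buyer's strictly increases while the dispatched and stored powers are positive.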
The simulation is carried out for a day-ahead time horizon with T = 24 hours, t = 1 hour, and up to 20 microgrids forming the networked system (incremental increases of 5 microgrids). Using equations (6) and (7), the buying and selling microgrids successfully adjusted their desired prices: 216 successful contracts were executed with a total traded power of 419.63 kW over the 24-h day-ahead horizon in the case of the 10-microgrid system, whereas 456 successful contracts were executed with a total traded power of 937.17 kW in the case of the 20-microgrid system. Fig. 4 shows an example of the progressive adjustment of the desired buyer (MG2) and seller (MG5) prices over successive contract matching rounds for one contract. The seller decreases its price after each round (blue line), while the buyer increases its offered purchase price after each round (red line), and a contract match occurred at a price of 0.198 $/kW. Initially the seller set its selling price to 0.25 $/kW and the buyer set its price to 0.15 $/kW, so there was no contract match in the first round. In the second round the seller decreased its price to 0.2329 $/kW based on (6), and the buyer increased its price to 0.175 $/kW based on (7); there was still no price match. In the third round, the seller decreased its price to 0.215 $/kW and the buyer increased its price to 0.2 $/kW. In the fourth round, the seller reduced its price to 0.198 $/kW and the buyer increased its price to 0.225 $/kW; hence, a contract was executed at the offered seller price of 0.198 $/kW (for a 1.69 kW transaction), because the seller price was now less than the buyer price. Additionally, the average computation time required to complete the discrete tasks (price adjustment, contract offering for possible matching) needed to execute a final contract was recorded for a varying number of microgrids in the network, as shown in Fig. 5. The average computation time to execute all contracts in the case of 20 microgrids is found to be well under one second (around 15.25 ms), which demonstrates the efficiency of the proposed trading model. It can also be noted that the average contract matching time increases almost linearly with the number of microgrids in the network. This is because, when the number of microgrids increases, more microgrids have the opportunity to participate in the local marketplace and conduct the price adjustment process to execute their contracts for a given task. This can be verified from Fig. 6, which shows the relationship between the number of formed contracts and the number of interconnected microgrids. The model successfully satisfied the objective of all networked MGs by balancing their net load, while allowing them to trade at their offered desired price. Each microgrid with a power deficit successfully purchased power to meet demand, while microgrids with a power surplus sold off their excess power; hence, demand-generation balance for the islanded interconnected system was achieved. For secure settlement, all executed contracts are saved in the blockchain, where each block in the contract chain contains only one energy trading contract. All blocks are created, and the blockchain is formed and verified, using the proposed two-phase consensus mechanism.
Each validated block in the chain contains: (i) buyer and seller identification IDs; (ii) the amount of traded power; (iii) the transaction price in $/kW; (iv) the timestamp of the execution of the transaction; and (v) an alphanumeric string (hash) taken from the previous block; that is, each block references its parent block through the "previous block hash". A sample of the contract data included in two generated blocks is shown in Fig. 7. To investigate the impact of the number of microgrids in the network on the validation time of the proposed consensus method, the validation time is measured as the difference between transaction submission time and confirmation time. The average validation time was calculated for an increasing number of microgrids, as shown in Fig. 8. It can be observed that the validation time increases approximately linearly with the number of networked microgrids. This is because more microgrids (more nodes in the network) increase the number of executed contracts and formed blocks, which correspondingly requires a longer validation process. The time required to validate all created blocks in a 20-microgrid network is found to be around 1.9 seconds, as shown in Fig. 8.

V. EFFECTIVENESS OF OBTAINED RESULTS

To demonstrate the effectiveness of the proposed model, its results were compared with the results of the recent work in [18] and [51]. Table 3 presents the full comparison, which demonstrates the time efficiency of the proposed energy trading model (less negotiation time for the same number of nodes, and an improved transaction success rate, where all deficit and surplus power is satisfied). The proposed method is time efficient compared to traditional methods that apply direct price negotiation between peers. In traditional direct price negotiation methods, both negotiators are fully dedicated to taking advantage of the contracts offered by the other peer and to bringing the other peer closer to their offered price. This increases the contract determination computation time and might lead to an unsuccessful negotiation process, which can cause a reliability problem when grid backup is absent for isolated networked microgrids. For a quick comparison of our negotiation method with the game theory-based algorithms commonly developed in the literature, our contract determination (convergence) time is compared with the results of the two game theory-based algorithms proposed in [51], as shown in Table 3. In the case of 20 interconnected microgrids, the results in [51] show average convergence times of 0.025 s and 0.05 s, respectively, whereas our negotiation method shows a shorter negotiation time of 0.0155 s for the same number of microgrids, as shown in Fig. 6, demonstrating the time efficiency of the proposed energy trading model. Finally, with regard to justifying the fast validation time of our proposed consensus method, the work in [52] confirms that in networked computer systems a pBFT algorithm can be executed in the order of milliseconds. Furthermore, in the modified PoS algorithm proposed for the second phase of the consensus process, the stake is calculated automatically from pre-existing data without a time constraint; therefore, the PoS algorithm does not significantly impact the validation time.
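For completeness, the validator selection just discussed can be sketched as follows. The multiplicative weighting of the pseudo-random draw is our own reading of the "semi-random value generated using a weighting factor" described in Section III-D, not a specification from the paper:

```python
import random

def select_validator(participation, seed=None):
    # participation: contracts each microgrid took part in (as buyer or
    # seller) during the previous interval; this count is the node's
    # stake, so no resource has to be wagered. A higher stake only raises
    # the chance of selection: the draw itself stays pseudo-random, so a
    # future validator cannot be predicted from known parameters.
    rng = random.Random(seed)
    stake_values = {mg: w * rng.random() for mg, w in participation.items()}
    return max(stake_values, key=stake_values.get)

# Example: MG2 participated in three contracts last interval (stake = 3),
# making it the most likely, but not the guaranteed, validator.
validator = select_validator({"MG1": 1, "MG2": 3, "MG3": 2})
```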
VI. CONCLUSION

In this paper, a two-layer blockchain-based energy trading algorithm for a group of isolated yet interconnected microgrids is proposed. The two-layer algorithm develops a pre-conditioned smart contract-based energy trading mechanism in layer one, and a novel two-phase blockchain-based contract settlement protocol in the second layer. It was found that the proposed electricity trading mechanism can efficiently promote energy trading between isolated networked microgrids to assure system reliability when grid backup is unavailable, while offering a price-fair negotiation mechanism for all peers in the islanded network. It also promotes the use of renewable energy sources. In addition, the proposed smart contract trading mechanism is executed in a local energy market where peers do not share their data, which guarantees privacy preservation for all microgrids in the network. Results have also shown that the proposed method is time efficient compared with the traditional P2P price negotiation protocols available in the literature. Furthermore, the verification of transactions before adding them to the blockchain is done using a novel two-phase consensus algorithm. The pBFT adopted in the developed consensus algorithm accounts for the existence of malicious nodes that might invalidate the voting process of the consensus mechanism, while being both time and energy efficient compared with traditional PoW-based consensus methods, which require expensive computing equipment performing tasks that consume significant energy. Future work may focus on developing a blockchain-based energy transaction model that considers the technical operation aspects of networked microgrids (e.g., tie-line congestion and voltage stability).
Palynological and physicochemical characteristics of three unifloral honey types from central Argentina

The characteristics of 59 unifloral honeys of Condalia microphylla Cav. ("piquillín"), Centaurea solstitialis L. ("yellow starthistle") and Prosopis spp., from La Pampa, Argentina, were studied. Pollen features (abundance and frequency of pollen types) and some physicochemical parameters (colour, electrical conductivity, free acidity, glucose content, glucose:water ratio, moisture and pH) were determined. Two different but related sets of calculations were done: the first involved single-factor variance analysis, while the second involved two multivariate methods, principal component analysis and cluster analysis. Variance and multivariate analysis allowed differentiation of the three honey types according to their physicochemical properties. The variables that best explained this differentiation were pH, electrical conductivity, colour, glucose content and the glucose:water ratio. Pollen analysis showed that the pollen frequency traditionally used (> 45%) for assigning botanical origin in honey was not valid for the unifloral honeys studied. Therefore, pollen analysis should be combined with the above physicochemical analysis in order to obtain a successful classification of these unifloral honeys.

Additional key words: botanical origin, Centaurea solstitialis, Condalia microphylla, melissopalynology, multivariate analysis, pollen analysis, Prosopis spp.

Introduction

Honey is a natural substance produced by bees (Apis mellifera L.) from flower nectar and from honeydew. The composition and properties of honey depend on the botanical origin of the nectar or secretion used. Consequently, its composition is influenced by many factors and is subject to variation. Several studies have attempted to establish suitable parameters for honey from the same botanical source. Honey is characterized by its palynological, chemical and physical properties. Pollen analysis appears to be the most frequently used method of honey identification (Louveaux and Vergeron, 1964; Louveaux et al., 1978; Anklam, 1998; von der Ohe et al., 2004), although for some honeys it is difficult to establish their exact origin (von der Ohe, 1994; Hermosin et al., 2003). As honey is a complex natural food, clear characterization of honey samples requires the use of several parameters. To establish the combinations of these parameters most closely related to the origin of honey, quality control methods and multivariate statistical analysis need to be used. These methods help to evaluate honey samples in their totality and give more precise classifications (Anklam, 1998; Ruoff et al., 2007). The objective of this work was to characterize unifloral honeys from three different botanical sources produced in La Pampa Province, Argentina. This was done using data from melissopalynological and physicochemical analyses to attempt the classification of honey samples according to their botanical origin.

Study area

Unifloral honeys from Prosopis spp. and Condalia microphylla are from the Caldén District of the Espinal Phytogeographical Province. Honeys from Centaurea solstitialis L. are from the Pampean Province (Cabrera, 1994).
The Caldén District, usually called the Caldenal, spreads over the central semiarid region of Argentina. In central Argentina, Centaurea solstitialis ("yellow starthistle") is a winter annual or biennial adventitious species. In natural areas and on rangelands it forms dense, impenetrable stands that displace desirable vegetation. Thus, yellow starthistle is a principal nectar source during the summer in disturbed areas of the steppe and caldenal.

Sample collection

Fifty-nine (n = 59) typical honey samples from Apis mellifera were collected by beekeepers in 2003, 2004 and 2005. They were obtained by centrifugation and stored at room temperature until analyzed. The honeys were harvested from different areas of La Pampa Province, Argentina (Fig. 1). Sensory analysis (crystallization type) was considered as a complementary criterion. Some Prosopis spp. and Condalia microphylla honeys were rejected, in the first case because of creamy crystallization and in the second because of non-crystallization; both showed characteristics atypical of these honey types.

Pollen analysis

The pollen spectrum of the honey samples was determined using the acetolytic method (Erdtman, 1960) and the method of Louveaux et al. (1978). Honeys from central Argentina show very few honeydew elements; therefore these were not calculated (Tellería, 1996; Andrada, 2001; Fagúndez and Caccavari, 2003). The different pollen types were identified by comparing them with a reference collection made from plants of the area. The preparations were deposited in the Palynological Collection of the Facultad de Agronomía, Universidad Nacional de La Pampa. The identified pollen was classified, according to frequency, into four classes: dominant (> 45%) = D, secondary (16-45%) = S, important minor (3-15%) = M, trace (< 3%) = T (Louveaux et al., 1978). To determine the frequency class, 1,000 pollen grains were counted. The absolute pollen content (APC) of the honey samples (i.e., the number of pollen grains in 10 g of honey) was calculated using tablets of Lycopodium spores (Stockmarr, 1971). Following Louveaux et al. (1978), five groups were considered: Group I (honey low in pollen, < 20,000/10 g), Group II (normal honey, 20,000-100,000/10 g), Group III (honey rich in pollen, 100,000-500,000/10 g), Group IV (honey extremely rich in pollen, 500,000-1,000,000/10 g), Group V (pressed honeys, > 1,000,000/10 g). Quantitative analysis of honey samples with over- or under-represented pollen was conducted according to Moar (1985), who suggested Trifolium repens L. honey as the standard, having 45% dominant pollen and belonging to Group II. Moar (1985) also explained how to estimate an adjusted absolute pollen content and an adjusted minimal pollen percentage for a honey sample to be classified as unifloral.

Physicochemical analysis

The honey samples were analyzed using the following methods:
- Colour was determined with a Coleman spectrophotometer by reading the absorbance of aqueous solutions at 635 nm (10 g honey in 20 mL water).
Table 1 shows the honey colours and their absorbance and mm Pfund values, obtained using the following algorithm (Bianchi, 1990): mm Pfund = -38.7 + 371.39 × absorbance.
- Electrical conductivity (EC, mS cm⁻¹) was determined with a Luftman C400 conductivity meter in a 20% (w/v) aqueous honey solution (dry matter basis).
- Free acidity: acidic components were neutralized with a standard solution of sodium hydroxide in aqueous honey solution (10 g in 75 mL of double-distilled water).
- Glucose content was determined by the glucose oxidase-peroxidase (GOD-POD) method. Absorbance was measured at 595 nm using a Metrolab 1700 spectrophotometer.
- Glucose:water ratio (G:W) was obtained from the water and glucose content percentages (White et al., 1962).
- Moisture was determined with an Abbe-type refractometer. The refractive index was converted using Chataway charts.
- Active acidity (pH) was determined, in aqueous solution, with a Horiba B-213 pH meter.

Statistical analysis

Analysis of variance and multivariate analysis were performed using Statgraphics Plus 3.1 software. Differences among means were tested for significance at P = 0.05 using the least significant difference (LSD) test. Principal component analysis (PCA) and cluster analysis (CA) were used to reduce the dimensions of the 7 × 59 data matrix, to determine relationships between physicochemical properties (variables) and honey samples (objects) through optimal 2-D graphical representation, and to define groups among the unifloral honeys. To determine similarities among samples and variables, the Euclidean distance and the group average method were used.

Pollen analysis

Table 2 shows the frequency of occurrence of pollen types in the three unifloral honeys. A total of 71 pollen types, from 35 plant families, were identified in the honey samples analysed. In Prosopis spp. honeys, 43 pollen types were identified, with 5 to 20 types per sample. Brassicaceae and Schinus spp. were present as secondary pollen. In the minor importance class were Centaurea solstitialis, Vicia spp., Eucalyptus spp., Condalia microphylla and Lycium spp. In Condalia microphylla honeys, 31 pollen types were identified, with 5 to 18 types per sample. These honeys were characterized by Vicia spp. and Eucalyptus spp. as secondary pollen, with Brassicaceae, Schinus spp., Prosopis spp. and Larrea spp. pollen being of minor importance. Table 3 shows the adjusted (APC_adj, DP_adj) and non-adjusted (APC, DP) values for absolute pollen content and percent dominant pollen for each honey type. The absolute pollen content indicated that pollen richness is a characteristic of Condalia microphylla honeys (Group III), whereas Prosopis spp. honeys belong to Group II and Centaurea solstitialis honeys to Group I. The percent dominant pollen adjusted (DP_adj) according to Moar (1985) indicated that the C. solstitialis honeys require 31.5% dominant pollen to be considered unifloral; the Condalia microphylla and Prosopis spp. honeys would require 64.5% and 75%, respectively.

Physicochemical analysis

All parameters showed high discrimination power in these honeys. However, moisture content only differentiated Prosopis spp. honeys from the others. In terms of colour, pH, free acidity and EC, Condalia microphylla honeys had the highest values, while Centaurea solstitialis and Prosopis spp. honeys had the highest glucose content and G:W ratio.

Multivariate analysis (CA and PCA)

Cluster analysis showed two clusters at a linkage distance level of 6, corresponding to the different botanical origins (Fig. 2).
From right to left, the first cluster is composed of Condalia microphylla honey samples. The second cluster has two sub-clusters, one composed of Centaurea solstitialis samples and the other of Prosopis spp. samples.

Multivariate CA of the variables, using the group average method and squared Euclidean distance, showed two distinct groups. One group comprised pH, conductivity, colour and free acidity, and the other glucose content, G:W ratio and moisture (Fig. 3).

To cluster the three botanical types of honey based on their physicochemical properties, a standardized PCA was applied. Principal component 1 (PC1) and PC2 accounted for 76.8% (i.e., 53.1% + 23.7%) of the total variance. Fig. 4 shows the correlation circle, where moisture content is located near the origin of the two PCs; this variable does not influence group formation in Fig. 5. The first component was positively correlated with colour, EC and pH and negatively with glucose content and the G:W ratio. The second component was positively correlated with pH and EC and negatively with free acidity. Condalia microphylla honeys had high, positive PC1 scores (reflecting dark colour and high values of EC, free acidity and pH). Centaurea solstitialis and Prosopis spp. honeys had high negative PC1 scores (reflecting high glucose content and G:W ratios). The last variable is related to the fast granulation observed in the Centaurea solstitialis and Prosopis spp. honeys, whereas Condalia microphylla honeys had little or no granulation (personal observation). Although Prosopis spp. and Centaurea solstitialis honeys appear very close on the biplot, they still formed two different groups, the first on the positive side and the second on the negative side.

Prosopis spp. honeys belong to Group II and would require 64% dominant pollen to be considered a unifloral honey, because their dominant pollen is slightly over-represented in the samples. Thus, the usual pollen frequency of >45% to assign honey botanical origin is not valid for the unifloral honeys studied in this work.

With regard to the physicochemical analysis, all the variables analyzed are widely recognized in the evaluation of the botanical origin of honey. Honey colour is closely linked to botanical origin and is used for honey classification. Generally, colour is related to sensory properties such as flavour and odour. Several factors can influence honey colour, such as floral source, mineral content and storage conditions (Thawley, 1969; Salinas et al., 1994). Prosopis spp. honeys were water white (±7.9 mm Pfund), while Centaurea solstitialis honeys were white (±22 mm Pfund) and Condalia microphylla honeys were dark, ranging from light amber to dark (>140 mm Pfund). The honey colour of the C. microphylla and Prosopis spp. honeys agreed with the results of Andrada (2001).

Honey EC is closely related to mineral concentration (total ash), salts, organic acids and protein. EC varies greatly with honey floral origin because conductivity and ash content depend on the material collected by the bees during foraging (Terrab et al., 2002; Serrano et al., 2004; Ouchemoukh et al., 2006). The EC results varied widely depending on honey type. The C. microphylla honeys had the highest EC values compared with the Prosopis spp. and Centaurea solstitialis honeys.
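To make the colour values concrete, here is a small sketch applying the Bianchi (1990) algorithm from the Methods. The colour-name boundaries are the usual Pfund-scale conventions, included here as an assumption since the paper's Table 1 is not reproduced.

```python
def mm_pfund(absorbance_635nm):
    """Convert absorbance at 635 nm (10 g honey in 20 mL water) to mm Pfund
    using the algorithm of Bianchi (1990)."""
    return -38.7 + 371.39 * absorbance_635nm

def colour_name(pfund):
    """Approximate Pfund-scale colour names (boundaries are an assumption)."""
    for upper, name in [(8, "water white"), (17, "extra white"), (34, "white"),
                        (50, "extra light amber"), (85, "light amber"),
                        (114, "amber")]:
        if pfund <= upper:
            return name
    return "dark amber"

a = 0.16                                   # hypothetical absorbance reading
p = mm_pfund(a)
print(f"{p:.1f} mm Pfund -> {colour_name(p)}")   # ~20.7 mm Pfund -> white
```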
Variation in free acidity among different honeys can be attributed to floral origin (El-Sherbiny and Rizk, 1979) or to variation due to the harvest season (Pérez-Arquillué et al., 1994). Free acidity differed widely among the three honey types; it was lowest in the Prosopis spp. honeys, while Condalia microphylla honeys had the highest values. Andrada (2001) reported free acidity values in C. microphylla honeys that were lower than in these samples; this could be related to the presence of secondary pollen from Prosopis spp. in those honeys.

The glucose content of any honey type depends largely on the nectar source (Anklam, 1998). Honey samples of different botanical origin had a wide range of glucose content. Values in Condalia microphylla honeys were low, as found by Tamame and Naab (2003). The granulation rate and the tendency to granulate are directly related to parameters such as the glucose, water and fructose content (White et al., 1962; Manikis and Thrasivoulou, 2001). The average glucose:water ratio is a criterion for predicting granulation tendency; applied to these honeys, it was a good predictor of granulated and non-granulated honeys, as found by Manikis and Thrasivoulou (2001) in Greek honeys. The low G:W ratio in the C. microphylla honey samples (≤1.7) confirms that these are less prone to granulation and would remain liquid for longer periods. The Prosopis spp. and Centaurea solstitialis honeys presented G:W ratios ≥2.1; however, Prosopis spp. samples showed faster crystallization than C. solstitialis honeys.

In non-adulterated honeys the moisture content depends on botanical origin, harvest season, processing techniques and storage conditions. The moisture content of the samples indicated a proper degree of maturity, in agreement with international requirements (Codex Alimentarius, FAO-OMS, 1990). The relatively high moisture values in Prosopis spp. honeys could be due to the early, spring harvest. Basilio and Nöetinger (2002) reported a similar moisture content in Prosopis spp. honeys from the Chaco Region of Argentina. On the other hand, the low moisture in the Centaurea solstitialis honeys can be related to the low relative humidity of the semiarid climate of the study area, as found by Andrada (2001).

Floral honeys are acidic, with a pH of 3.0 to 4.3 (Bogdanov et al., 1999). The pH values in these samples fell within the acceptable range for floral honeys. Condalia microphylla honeys had significantly higher pH values than the other honeys.

The multivariate analysis gave quantitative results for the classification of unifloral honeys, in agreement with Ruoff et al. (2007). The PCA and CA showed that the selected chemical parameters (colour, EC, free acidity, glucose content, G:W ratio, moisture and pH) provided enough information to develop a botanical classification for the honeys studied. Consequently, determination of the chemical properties of unifloral honeys can be a useful tool to complement melissopalynological studies. Quantitative pollen analysis showed that the usual pollen frequency (>45%) for a correct botanical origin assignment in honey was not valid for the unifloral honeys studied.

All the analyzed honeys had excellent quality properties according to Argentinean and international standards (Codex Alimentarius, FAO-OMS, 1990; Bogdanov et al., 1997). The results of this study allowed the classification of three central Argentinean unifloral honeys and justify the use of these parameters with other Argentinean unifloral honeys.
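The glucose:water criterion discussed above reduces to a one-line rule; the sketch below encodes it with the thresholds observed in this study (≤1.7, slow/no granulation; ≥2.1, fast granulation). The numeric inputs are hypothetical.

```python
def granulation_tendency(glucose_pct, moisture_pct):
    """Predict granulation from the glucose:water ratio (White et al., 1962),
    using thresholds observed in this study's honeys."""
    gw = glucose_pct / moisture_pct
    if gw <= 1.7:
        return gw, "slow or no granulation (e.g., C. microphylla honeys)"
    if gw >= 2.1:
        return gw, "fast granulation (e.g., Prosopis and C. solstitialis honeys)"
    return gw, "intermediate tendency"

print(granulation_tendency(38.0, 17.5))   # hypothetical values -> G:W ~2.17, fast
```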
Figure 1. A. Geographical location of La Pampa Province, Argentina; B. La Pampa Province subdivided into departments. The study area is gray.
Figure 2. Dendrogram of cluster analysis (CA) of honey samples (group average method, Euclidean distance). Ce, Centaurea solstitialis; P, Prosopis spp.; Co, Condalia microphylla. Ordinate shows distance units.
Figure 3. Dendrogram of cluster analysis (CA) of physicochemical variables (group average method, squared Euclidean distance). Ordinate shows distance units.
Figure 5. Ordination from principal component analysis of 59 honey samples from three botanical origins by seven physicochemical properties. Samples are located in the space of the first two principal components.
Table 1. Honey colour expressed in absorbance and mm Pfund values.
Table 4. The physicochemical parameters of the honey samples. Upper line: mean ± SD. Lower line: range (minimum and maximum). Means in a row followed by different letters are significantly different.
2018-12-12T20:34:21.620Z
2008-12-01T00:00:00.000
{ "year": 2008, "sha1": "d45594ad16153e3a3fb0f4171391b43e61a8af77", "oa_license": "CCBY", "oa_url": "https://revistas.inia.es/index.php/sjar/article/download/351/348", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d45594ad16153e3a3fb0f4171391b43e61a8af77", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
237513639
pes2o/s2orc
v3-fos-license
A toolbox for elementary fermions with a dipolar Fermi gas in a 3D optical lattice

There has been growing interest in investigating properties of elementary particles predicted by the standard model. Examples of such studies include exploring their low-energy analogs in condensed matter systems, where they arise as collective states or quasiparticles. Here we show that a toolbox for systematically engineering the emergent elementary fermions, i.e., Dirac, Weyl and Majorana fermions, can be built in a single atomic system composed of a spinless magnetic dipolar Fermi gas in a 3D optical lattice. The designed direction-dependent dipole-dipole interaction leads to both the basic building block, i.e., an in-plane p+ip superfluid pairing instability, and the manipulating tool, i.e., an out-of-plane Peierls instability. It is shown that the Peierls instability provides a natural way of tuning the topological nature of p+ip superfluids and thus transforming the fermions between distinct emergent particles. Our scheme should open up a new thrust towards searching for elementary particles through manipulating topology.

Fundamental particles are either the building blocks of matter, called fermions, or the mediators of interactions, called bosons. These elementary particles can be understood within the framework of relativistic quantum field theory [1,2], encompassing Dirac, Weyl and Majorana fermions [3][4][5][6][7]. However, only Dirac fermions have been observed as elementary particles in nature so far. For many years now, another promising approach to observing particle properties that have no realization as elementary particles has been the investigation of their low-energy analogous quasiparticles, such as in condensed matter or atomic systems. This paves a new way for exploring fundamental particles without paying the steep price of a high-energy particle collider and has thus attracted tremendous research interest in various fields of physics. There have been several exciting advances in the search for emergent Dirac, Weyl and Majorana fermions. Recent examples include graphene [8][9][10][11][12][13] and several topological phases in solids, including quantum Hall states, topological insulators/superconductors, etc. [14][15][16]. Another exciting perspective is the recent realization of artificial materials, such as cold atoms [17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35] or photonic crystals [36,37]. However, finding a single material that can systematically transform the fermions' nature and realize distinct elementary fermions in its equilibrium state is highly non-trivial and still stands as an obstacle yet to be overcome.

Here we show that ultracold gases of magnetic dipolar atoms or polar molecules, as presently developed in the laboratory, provide new opportunities for constructing a toolbox to systematically engineer all three kinds of elementary fermions listed above. The attractiveness of this idea rests on the fact that the strength and even the sign of the dipolar interaction in cold atoms are highly tunable [38]. The direction of the dipole moments can be fixed by applying an external magnetic field. Let the external field be orientated at a small angle with respect to the xy-plane and rotate fast around the z-axis. The time-averaged interaction between dipoles is then isotropically attractive in the xy-plane and repulsive along the z-direction.
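A quick numerical check of this sign structure is given below, using the time-averaged interaction written out explicitly in the Effective model section that follows; the units and the value of cos²ϕ are arbitrary choices for illustration.

```python
import numpy as np

def v_dd(r, theta, phi, d=1.0):
    """Time-averaged interaction between two dipoles rotated rapidly about z:
    V = d^2 (3 cos^2 phi - 1)(1 - 3 cos^2 theta) / (2 r^3).
    r: inter-particle distance, theta: angle between r and the z-axis,
    phi: tilt of the field from the z-axis."""
    return d**2 * (3*np.cos(phi)**2 - 1) * (1 - 3*np.cos(theta)**2) / (2*r**3)

phi = np.arccos(np.sqrt(0.2))           # cos^2(phi) = 0.2 < 1/3
print(v_dd(1.0, np.pi/2, phi))          # in-plane (theta = pi/2): -0.2 -> attractive
print(v_dd(1.0, 0.0,     phi))          # along z  (theta = 0):    +0.4 -> repulsive
```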
Such a scheme has been realized in the experimental system of dysprosium atoms [39]. In general, the xy-plane attraction is expected to cause a superfluid pairing instability and lead to an in-plane p + ip superfluid in a spinless dipolar Fermi gas. The repulsion should restrict pairing along the z-direction and result in a Peierls instability in the presence of the lattice potential. Such a spontaneously formed density modulation provides a natural tool to manipulate the topological nature of p + ip superfluids, and thus allows us to build a toolbox for systematically engineering all three kinds of elementary fermions above by tuning the topology of our proposed single atomic system. This heuristically argued result is indeed confirmed by our detailed analysis of the model introduced below.

Effective model

Consider a spinless dipolar Fermi gas, such as 161Dy [40,41] or 167Er [42,43], subjected to an external rotating magnetic field B(t) = B[ẑ cos ϕ + sin ϕ(x̂ cos Ωt + ŷ sin Ωt)], where Ω is the rotation frequency, B is the magnitude of the magnetic field, the rotation axis is z, and ϕ is the angle between the magnetic field and the z-axis. In strong magnetic fields, dipoles are aligned parallel to B(t). For fast rotations, i.e., Ω much larger than the typical frequencies of particle motion and simultaneously much smaller than the level splitting in the field, the effective interaction between dipoles is the time-averaged interaction V(r) = d²(3cos²ϕ − 1)(1 − 3cos²θ)/(2r³), where d is the magnetic dipole moment, r is the vector connecting two dipolar particles, and θ is the angle between r and the z-axis. The effective in-plane attraction is created by making cos²ϕ < 1/3, which can be realized by changing the amplitudes of the static and rotating parts of the magnetic field. We further consider these dipolar atoms loaded in a three-dimensional optical lattice V_opt(r) = −V₀[cos²(k_Lx x) + cos²(k_Ly y)] − V_0z cos²(k_Lz z), where k_Lx, k_Ly and k_Lz are the wave vectors of the laser fields, and the corresponding lattice constants are defined as a_x = π/k_Lx, a_y = π/k_Ly and a_z = π/k_Lz. Here V₀ and V_0z are the lattice depths in the xy-plane and the z-direction, respectively. In this work we consider an anisotropic 3D lattice with a_x = a_y ≡ a < a_z. When the lattice depths are large enough, the low-energy physics is captured by a single-band tight-binding Hamiltonian with dipolar interactions, referred to below as Eq. (1).

To study the many-body instabilities of the Hamiltonian Eq. (1), we have performed both mean-field theory and Monte Carlo (MC) simulations. By employing the self-consistent Hartree-Fock-Bogoliubov (HFB) method, the zero-temperature phase diagram is obtained, as shown in Fig. 1(a). Furthermore, the distinct many-body phases in Fig. 1(a) were identified through MC simulations. In the HFB method, the Peierls instability can be described by writing the density distribution of the system as n_i = n₀ + C cos(Q·R_i), where Q represents the periodicity of the density pattern and n₀ = Σ_i ⟨c†_i c_i⟩/N_L is the average filling, with N_L denoting the total number of lattice sites. The order parameter describing this charge density wave (CDW) can thus be defined as δ = (1/N_L) Σ_i e^{iQ·R_i} ⟨c†_i c_i⟩. We also introduce the superfluid pairing order parameter as ∆(k) = (1/N_L) Σ_{k′} V(k − k′) ⟨c_{−k′} c_{k′}⟩, where ⟨..⟩ means the expectation value in the ground state. Through minimizing the mean-field thermodynamic potential, the order parameters defined above can be obtained (see details in the Supplementary Materials (SM)).
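As a sanity check of the CDW order parameter just defined, the sketch below evaluates δ and the corresponding structure-factor peak for the ansatz density profile n_i = n₀ + C cos(Q·R_i) on an 8 × 8 × 8 lattice (matching the VMC system size used below); the normalization of S(Q) is our assumption.

```python
import numpy as np

# delta = (1/N_L) sum_i exp(i Q . R_i) <n_i>, evaluated for the CDW ansatz
# n_i = n0 + C cos(Q . R_i) with Q = (0, 0, pi/a_z); a_z set to 1.
L = 8
z = np.arange(L)
n0, C = 0.6, 0.1
n_z = n0 + C * np.cos(np.pi * z)             # density depends on z only
n = np.broadcast_to(n_z, (L, L, L))          # last axis is z

phase = np.exp(1j * np.pi * z)               # exp(i Q . R_i), broadcast over z
delta = np.mean(n * phase)                   # equals C here (Q at the zone boundary)
S_Q = np.abs(np.sum(n * phase))**2 / L**3    # extensive peak signals CDW order
print(delta.real, S_Q)                       # 0.1, 5.12
```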
We find that the pairing order parameter ∆(k) behaves like an in-plane p + ip superfluid and can be described as ∆(k) = ∆(sin(k_x a) + i sin(k_y a)). At the same time, we also find that the CDW order is highly tunable through simply varying the average filling. For instance, there is a region of n₀, i.e., 0.5 ≤ n₀ < n_A ≈ 0.67, where Q is located at (0, 0, π/a_z), indicating that the period of the z-directional density modulation is 2a_z. When further increasing n₀, i.e., n_A < n₀ < n_B ≈ 0.85, the period of the z-directional CDW order changes to 3a_z, characterized by Q fixed at (0, 0, 2π/3a_z).

To further verify the existence of the CDW and superfluid orders, we have performed a variational Monte Carlo (VMC) [44][45][46][47][48] calculation on an 8 × 8 × 8 lattice system with periodic boundary conditions. Their existence can be captured by the long-ranged saturation behavior of the in-plane superfluid pairing correlation P(R) and by the peak of the structure factor S(Q), respectively (see details in SM). The results of the VMC simulation are consistent with the mean-field calculation, as shown in Fig. 1(a).

Physical mechanism for tuning the topological nature

To understand how we can utilize the spontaneously formed density modulation as a tool to manipulate the topological nature of the system, let us start with our basic building block, the in-plane p + ip superfluid. It is known that the topology of a 2D p + ip superfluid can be changed by tuning the system filling, resulting in distinct regions: a topologically trivial region and two topological regions with opposite chirality [49]. In our proposed system, the spontaneously formed z-directional density modulation serves as a natural tool to tune the fillings of the p + ip superfluid layers by effectively altering their respective chemical potentials, and thus changes their topological nature, as confirmed by our detailed analysis below. For instance, let us consider the case where the period of the z-directional density modulation is 2a_z, i.e., Q = (0, 0, π/a_z). The topology of the system can be understood through the Bogoliubov-de Gennes (BdG) Hamiltonian. After applying a series of unitary transformations (see SM for details), the BdG Hamiltonian H^π_BdG decouples into two effective in-plane p + ip Hamiltonians, H^π_p-wave and H̃^π_p-wave, with quasiparticle bands E^π_1(k) = (ξ_k + ξ_{k+Q})/2 − √[4δ² + ((ξ_k − ξ_{k+Q})/2)²] and E^π_2(k) = (ξ_k + ξ_{k+Q})/2 + √[4δ² + ((ξ_k − ξ_{k+Q})/2)²]. Here ξ_k = ε_k + Σ_k − µ, with the band energy ε_k = −2t(cos k_x a + cos k_y a) − 2t_z cos k_z a_z and the Hartree-Fock self-energy Σ_k, whose nearest-neighbor form is given below. H^π_BdG clearly shows that the topology of the system can be engineered by simultaneously manipulating the two effective Hamiltonians H^π_p-wave and H̃^π_p-wave describing in-plane p + ip superfluids. This can be naturally achieved via the spontaneously formed density modulation in our proposed scheme. To show this, let us consider the Hamiltonian H^π_BdG at a fixed k_z. The topologically distinct regions of the p + ip superfluid Hamiltonian H^π_p-wave (H̃^π_p-wave) are (i) µ′ (µ″) < −4t′ or µ′ (µ″) > 4t′, which is the topologically trivial region, and (ii) −4t′ < µ′ (µ″) < 0 or 0 < µ′ (µ″) < 4t′, which are the topological regions with opposite chirality. Here µ′ = µ̃ − √[4δ² + (2t_z cos k_z a_z)²] and µ″ = µ̃ + √[4δ² + (2t_z cos k_z a_z)²], with the effective hoppings t′_α = t_α − Σ_α/2, t′ ≡ t′_x = t′_y, and µ̃ = µ − V(0)n₀.
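A minimal sketch of the tuning mechanism just described: for each k_z, the CDW gap δ shifts the two effective chemical potentials µ′ and µ″, which then land in the trivial or chiral regions of the lattice p + ip phase diagram. The parameter values are illustrative, not fitted to the paper's Fig. 1(a); the chirality sign convention is chosen to match the C = −1 assignment for 0 < µ < 4t′ quoted below.

```python
import numpy as np

t, tz, delta, mu_tilde = 1.0, 0.5, 0.8, 0.0   # illustrative values; t' set to t

def regions(kz_az):
    shift = np.sqrt(4*delta**2 + (2*tz*np.cos(kz_az))**2)
    mu1, mu2 = mu_tilde - shift, mu_tilde + shift   # mu', mu''

    def chern(mu):                      # lattice p+ip phase diagram [49]
        if mu < -4*t or mu > 4*t:
            return 0                    # topologically trivial
        return +1 if mu < 0 else -1     # opposite chirality on either side of mu = 0

    return chern(mu1), chern(mu2)

for kz in np.linspace(-np.pi/2, np.pi/2, 5):
    c1, c2 = regions(kz)
    print(f"kz*az = {kz:+.2f}: C' = {c1:+d}, C'' = {c2:+d}, total = {c1 + c2:+d}")
```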
Note that, to simplify the discussion, we consider the strongest exchange interaction energy between nearest neighbors, Σ_x(y) = Σ_k (2J/N_L) cos(k_x(y) a) n_k and Σ_z = −Σ_k (4J a³/(N_L a_z³)) cos(k_z a_z) n_k, with J ≡ |d²/(t a³)| capturing the strength of the dipolar interaction. The two effective chemical potentials µ′ and µ″ can be tuned simultaneously by changing the CDW order δ via varying J, and thus the topology of the system can be manipulated. For example, consider the case of J = 0.8 in Fig. 1(a). The CDW order δ simultaneously tunes the two effective chemical potentials in different regions: (i) for −π/2 ≤ k_z a_z < −k_c^I (k_c^I ≈ 0.39π) or k_c^I < k_z a_z < π/2, where 0 < µ′ (µ″) < 4t′, H^π_p-wave and H̃^π_p-wave are simultaneously engineered in the same topological region with Chern number C = −1; (ii) for −k_c^I < k_z a_z < k_c^I, where −4t′ < µ′ < 0 and 0 < µ″ < 4t′, H^π_p-wave and H̃^π_p-wave are tuned into topological regions with opposite chirality, characterized by C = ±1. Therefore, a topological phase (phase I in Fig. 1(a)), characterized by the existence of a topological phase transition between two topological regions with C = −2 and C = 0 along the k_z-axis, is achieved. When increasing J, for instance to J = 1.2, the CDW order increases and plays a dominant role in tuning the effective chemical potentials. Distinct from smaller J, here for each k_z within the Brillouin zone (BZ) the effective chemical potentials lie in the same regions, −4t′ < µ′ < 0 and µ″ > 4t′. H^π_p-wave and H̃^π_p-wave are thus engineered in a topological region with C = 1 and a non-topological region with C = 0, respectively. Therefore, a new topological phase (phase IV in Fig. 1(a)), characterized by a uniform Chern number (C = 1), is obtained. Using the same approach, two other topological phases (phases II and III in Fig. 1(a)) can be determined. We also find that there is a threshold of J below which δ = 0 and the uniform p-wave superfluid is favored. We can thus map out the zero-temperature phase diagram shown in Fig. 1(a). Such an analysis is readily generalizable to the case Q = (0, 0, 2π/3a_z); another four distinct topological phases are obtained (see details in SM), indicating that our scheme provides a systematic way of engineering the topological nature of the system.

A toolbox for engineering elementary fermions

We now show how to engineer three kinds of elementary fermions, i.e., Dirac, Weyl and Majorana fermions, through manipulating the topology of the system. As shown in Fig. 1(a), in topological phases I and III there is a topological phase transition occurring along the k_z-axis. For example, in phase III the phase boundary along the k_z-direction corresponds to the emergence of two gapless points in the quasiparticle excitations at (π, π, ±k_c^III), as shown in Fig. 2(a). These two gapless points turn out to be Weyl nodes. Close to a Weyl node, the effective Hamiltonian takes the form of a 2×2 Hamiltonian describing chiral Weyl fermions, which can be expressed as H_eff = −∆(k_x a − π)σ_x − ∆(k_y a − π)σ_y + v_z (k_z a_z − k_c^III)σ_z ≡ d·σ, with v_z = ∂E^π/∂k_z|_{k=(π/a, π/a, k_c^III/a_z)} (see SM for details). The quasiparticle energy dispersion is linear around the Weyl points. As shown in Fig. 2(b) and (c), the Weyl nodes are hedgehog-like topological defects of the vector field d; they are point sources of Berry flux in momentum space, with topological invariant N_C = ∓1.
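For a linearized two-band node d(k)·σ, the hedgehog charge can be read off from the sign of the Jacobian det(∂d_i/∂k_j) at the node, which is a common numerical shortcut to the Green's-function invariant defined next. The sketch below does this for a d-vector of the form reconstructed above; the velocities are placeholders.

```python
import numpy as np

Delta, vz = 1.0, 0.7                 # placeholder gap amplitude and z-velocity

def d_vec(k):
    """Linearized d-vector near the node; k is measured from the node."""
    return np.array([-Delta*k[0], -Delta*k[1], vz*k[2]])

def chirality(f, h=1e-5):
    """Sign of det(d d_i / d k_j) via central finite differences."""
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (f(e) - f(-e)) / (2*h)
    return int(np.sign(np.linalg.det(J)))

print(chirality(d_vec))              # +1 here; the partner node carries -1
```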
N_C is defined by N_C = (1/24π²) ε_{µνγχ} tr ∮_Σ̄ dS_χ Ḡ (∂Ḡ⁻¹/∂k_µ) Ḡ (∂Ḡ⁻¹/∂k_ν) Ḡ (∂Ḡ⁻¹/∂k_γ), where Ḡ⁻¹ is the inverse Green's function for the quasiparticle excitations, Σ̄ is a 3D surface around the isolated gapless points, and tr stands for the trace over the relevant particle-hole degrees of freedom. Quasiparticle excitations near the gapless points realize the long-sought low-temperature analogs of the Weyl fermions originally proposed in particle physics. These Weyl nodes are separated from each other in momentum space. They cannot hybridize, which makes them indestructible: they can only disappear by mutual annihilation of pairs with opposite topological charges, which is distinct from the spectral-gap protection in insulating topological phases.

Furthermore, as shown in Fig. 1(a), when varying J the system undergoes phase transitions between various topological phases. The phase boundaries correspond to gap closings in the quasiparticle excitations. Interestingly, we find that these gapless points develop low-temperature analogs of the Dirac topological defect. For example, considering the phase boundary between phases III and IV, the effective Hamiltonian around the gapless point can be expressed as H^π_eff = −∆(k_x a − π)σ_x − ∆(k_y a − π)σ_y ≡ h·σ (see details in SM). As shown in Fig. 3(b), the vector field h forms a vortex structure in momentum space. At the vortex core, the length of the vector vanishes, indicating a gap closing in the quasiparticle excitations. It therefore forms a Dirac topological defect, which is confirmed by the calculation of the winding number W = (1/2π) ∮ dk · [(h_x/|h|)∇(h_y/|h|) − (h_y/|h|)∇(h_x/|h|)] being equal to −1.

Besides hosting quasiparticles analogous to Dirac and Weyl fermions, our proposed set-up can also serve as a tool for systematically engineering both paired and unpaired Majorana fermions. For example, in phase IV there are two zero-energy states, as shown in Fig. 4(a), when k_z a_z = −π/2. The corresponding wavefunctions (see details in SM) satisfy the relation u⁰_{k,i_y} = −v⁰*_{k,i_y} [u⁰_{k,i_y} = v⁰*_{k,i_y}] on the left [right] edge (Fig. 4(c)). These eigenstates thus support one localized Majorana fermion on each edge.

Estimates of the critical temperature

In current experiments, for example when considering 161Dy in a lattice with lattice constant a = 225 nm, where the dipolar interaction strength can be tuned to J = 3, the critical temperature of our proposed phases, such as those shown in Fig. 1(a), can reach around 0.1 nK (see SM for details). Furthermore, taking advantage of the recent experimental realization of Feshbach resonances in magnetic lanthanide atoms [50,51], the dipole-dipole interaction becomes highly tunable. The critical temperature can then be estimated to reach around 10 nK or even higher, making our proposal promising for experimental realization.

Conclusion

We have shown how to construct a toolbox for systematically engineering three distinct emergent elementary fermions by tuning the topology of the system. A new link is built between the search for fundamental particles and topological phenomena such as p + ip superfluids. This paves a new way for transforming the fermions' nature between various elementary fermions, with applications ranging from fundamental physics to quantum computing.
2019-04-19T08:23:00.000Z
2019-04-19T00:00:00.000
{ "year": 2019, "sha1": "5025b8d4430a1248f27b93030e0df64c548f4a6d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1904.09118", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "5025b8d4430a1248f27b93030e0df64c548f4a6d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225897778
pes2o/s2orc
v3-fos-license
Job Satisfaction and the Associated Factors Amongst Nurses in Southeastern Nigeria: A Cross-Sectional Study

Background: Job satisfaction is a significant indicator of the way nurses feel about their profession and of whether they put effort into their professional duties or abandon them willingly.

Method: A cross-sectional research design was used to assess job satisfaction and the associated factors among 300 nurses. Data were analyzed using descriptive statistics and the Kruskal-Wallis test for association between the socio-demographic variables and job satisfaction at a significance level of 0.05.

Result: About one-third of the respondents (31%) reported gross dissatisfaction with their job, 0% reported being well satisfied, while 68.7% of respondents reported moderate satisfaction with their job. Across items on the scale, gross dissatisfaction was noted on key managerial factors and the salary of the workers. Job satisfaction was associated with specialty (P < 0.018), gender (P < 0.002) and age (P < 0.000) of nurses.

Conclusion: The majority of the respondents were moderately satisfied with their job but grossly dissatisfied with salary and administrative factors such as communication flow.

Background of the Study

Job satisfaction has been defined as the feelings and attitude of people towards their work [1]. It is a significant indicator of the way nurses feel about their profession and of whether they put effort into their professional duties or abandon them willingly [2]. It also refers to how well a job provides gratification of a need, or how well it provides a means of self-gratification. Job satisfaction is an important element in keeping the workforce of any organization. Lack of professional satisfaction not only hinders the pace of work but can also adversely affect individuals, as in burnout [3]. Optimum commitment and maximal output in any occupation depend on the level of satisfaction that employees derive from the work. Job satisfaction among nurses is vital to healthcare delivery, since they form the largest workforce and play pivotal roles in healthcare: satisfied nurses are endowed with the physical and emotional dexterity and the effort needed to perform their tasks [4]. This enhances the quality of care provided to the patient. It has been empirically argued that organizations cannot be at their best until workers are committed to the organizational goals and objectives, and such commitment can only be achieved via job satisfaction [5]. As the largest healthcare workforce across the world, nurses are vital key players if health for all and the sustainable development goals are to be achieved globally [6]. In most countries of the world, nurses are often the only healthcare workers readily accessible to millions of people in their lifetime [6]. Addressing nurses' satisfaction with their job is a crucial matter, especially in developing countries [7]. In most developing countries, nurses are the backbone of the healthcare delivery system and key players in the achievement and promotion of any health program [6]. As affirmed by Ahmad and Abu Raddaha [8], an increasing number of healthcare workers such as nurses are voluntarily leaving their countries or profession, driven by numerous factors, among which is job dissatisfaction.
Job dissatisfaction and burnout have been recurrent and serious problems among nurses and other workers in different parts of the world, but they are more common in developing countries, leading many nurses and other health professionals to quit their jobs for better ones in developed countries such as America [9]. Recent studies have shown a direct correlation between staff satisfaction and patient satisfaction in healthcare organizations [10]. Job satisfaction of nurses is interrelated with the quality of healthcare, patient satisfaction with the services they receive, patient compliance and continuity of care [11]. Moreover, dissatisfaction leads to increased absenteeism, lower productivity and increased turnover, each of which raises costs to the medical system [11]. Considering that the level of job satisfaction affects not only the quality of the roles performed by the nurse but also patient satisfaction with care, it is very important for healthcare institutions to measure these perceptions among nurses and act to improve them [2]. Satisfaction with a job can be influenced by numerous factors, among which are adequate staffing, competitive remuneration, a healthy work environment, career advancement opportunities, adequate workload, mutual and friendly supervision, obvious improvement in patients' health, and others [12].

Africa as a continent is currently facing a serious human resource crisis in the health sector [13]. For instance, South Africa has 0.818 physicians per 1000 patients and 5.229 nurses and midwives per 1000 patients, reportedly the second highest after Libya [14]. An estimated one-fifth of African healthcare workers migrate to the western world for employment purposes. About 10,684 African physicians left the continent in 2005 and 13,584 in 2015 for greener pastures in European countries [15]. The reasons for the above are poor financing of the health sector, poor remuneration, lack of career advancement and a poor work environment [13]. According to the WHO [16], the worst health worker shortages are recorded in Asia and Africa, with a shortage of about 4.3 million doctors, midwives and nurses within African countries that must be overcome before universal health coverage can be achieved. These severe human resource shortages have affected the ability of many countries to initiate and sustain credible health services. Although several reforms and policies have been developed to address health problems on the continent, little attention has been given to creating a desirable workplace that would lead to greater job satisfaction [10].

The healthcare system of Nigeria is one of the weakest in Africa, despite Nigeria being the most populous country on the continent [17]. It is a country with few doctors and nurses and poor health indicators compared with other countries [10]. It is also a country with incessant strike actions among healthcare workers, making access to health services by the masses very difficult. Nigeria has 15 nurses and 3.83 physicians per 10,000 persons [18]. Nigeria is one of the countries with the poorest health indices in the world, with a maternal mortality rate of 1,350 per 100,000 live births and an under-5 mortality rate of 109 per 1,000 live births, and it accounts for 10% of the world's disease burden; yet nurses and other healthcare professionals emigrate from the country in large numbers to western countries in search of greener pastures, a healthy work environment, and positive career growth [13].
Each year, over 600 general practitioners in Nigeria migrate to other western countries, especially in Europe and North America, and about 13% of registered nurses and midwives in Nigeria migrate to other countries, leaving the healthcare of the country in shambles [14]. The major reasons for the mass migration of health workers from most African countries are poor working conditions, lack of security, poor financing of the healthcare sector, and the need for positive progression in the workers' careers [19]. For the above situation to be improved and universal health coverage for all achieved, the few health workers must be satisfied and retained, and the healthcare environment improved for better healthcare delivery. This begins with research evidence on job satisfaction among health workers, which is largely non-existent in Nigeria. It is in the light of the above that this study assesses job satisfaction and associated factors among nurses in Southeastern Nigeria.

Purpose of the Study

The purpose of this study is to assess job satisfaction amongst nurses at the Federal health facility in Abakaliki.

Specific Objectives of the Study

This study seeks to:
1) Determine the level of job satisfaction among nurses at FETHA
2) Identify the factors associated with job satisfaction amongst nurses at FETHA

Design

A cross-sectional descriptive design was adopted for this research. An institution-based survey was conducted from September to December 2019 among 300 nurses with work experience of at least six months in the Abakaliki federal health facility. The method is considered appropriate because it gives current and immediate information about the situation under study.

Study Area

The study was carried out at the Federal health facility in Southeastern Nigeria. The Federal healthcare facility is the only hospital in the Abakaliki metropolis and one of the biggest in Southeastern Nigeria. It is the only government-owned hospital in Ebonyi State, catering for the health needs of over five million people with few nurses.

Study Population

This study assessed the job satisfaction of nurses in the federal health facility in Abakaliki, Southeastern Nigeria. A total of 300 nurses with at least six months' working experience in the facility were included in the study.

Sampling Method

A total population study approach using 300 nurses was adopted because of the small number of nurses. A job satisfaction scale was used to assess the levels of job satisfaction in this study.

Ethical Consideration

Ethical approval was obtained from the ethical committee of the hospital under study. The consent of the respondents was obtained before the study.

Instrument for Data Collection

The instrument for this study has two parts: part A deals with the demographic variables of the respondents, while part B is the job satisfaction scale. The SOGO survey scale was used to assess the employees' job satisfaction. It is a seven-point Likert scale with responses ranging from "disagree completely" to "agree completely".

Reliability of the Instrument

The reliability of the instrument was established through a pilot study using 20 nurses from Ikwo General Hospital, a hospital outside the study area. Reliability was established through the test-retest method; a Cronbach alpha value of 0.78 was obtained, showing that the tool is reliable.
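For reference, Cronbach's alpha as reported above can be computed from an item-score matrix as follows. The pilot data here are simulated, and the number of items (16, each scored 1-4) is an assumption made only so that summed scores span the 16-64 range used in the analysis below.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Simulated pilot data: 20 nurses x 16 items
rng = np.random.default_rng(1)
trait = rng.integers(1, 4, size=(20, 1))                       # shared tendency
pilot = np.clip(trait + rng.integers(-1, 2, size=(20, 16)), 1, 4)
print(round(cronbach_alpha(pilot), 2))
```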
Method of Data Collection

Data for this study were collected using the Job Satisfaction Scale described above. Following ethical approval for the study, the researcher introduced himself to the respondents, obtained their consent after explaining the purpose of the study, and administered copies of the questionnaire to the respondents on the same day, on a face-to-face basis. The researcher was assisted by two trained research assistants. All the questionnaires were retrieved on the same day, giving a 100% return rate.

Method of Data Analysis

Data were analyzed using SPSS Version 24. Descriptive statistics were used to summarize the demographic variables. For research objective one, total scores were first computed and then categorized into levels: 16-32 (dissatisfied), 33-48 (moderate satisfaction) and 49-64 (well satisfied) [12]. The Kruskal-Wallis test was used to test the association between the demographic factors and job satisfaction (a small worked example follows the results below).

Data Quality Assurance

The quality of the collected data was assured through training of the research assistants on the use of the scale, approaches to the respondents and other related issues. Following the training, the level of assimilation of the research assistants was assessed using 20 nurses in the Ikwo healthcare centre, after which the data were analyzed and appropriate modifications made before the main study. The researcher supervised the data collection and ensured that responses to the tools were given on an individual basis, not as a group as was done in a similar study [6].

Based on the socio-demographic characteristics of the respondents (Table 1), the majority of the respondents are females (75.3%). In terms of years of experience, a greater proportion of the participants have more than 4 years in nursing practice. The specialties of the respondents were theatre nursing (4.7%), general nursing (36.0%), intensive care nursing (11.3%), mental health nursing (13.3%) and emergency nursing (20.3%). The current positions of the participants were nursing officer I (20.0%), nursing officer II (20.0%), principal nursing officer (20.3%), assistant chief nursing officer (20.0%) and chief nursing officer (19.7%). Regarding educational level, RNs were the majority, with a quarter of the participants belonging to this category, followed by the RN&RM category. In terms of age, 56.7% were between 18 and 39 years of age, as shown in Table 1.

Overall, of the 300 respondents, the majority were moderately satisfied (68.7%) with their job, while 31.3% were dissatisfied and 0% were well satisfied, as shown in Table 2. In terms of job satisfaction across the items in the scale, 53.6% agreed that the "hospital clearly conveys its mission to the staff" while 46.4% disagreed strongly with it. Of the participants, 72% disagreed with the item "There is good communication from manager to employees", 91.3% were dissatisfied with their monthly salary, and 83.3% reported a gross lack of necessary equipment to work with, among other items summarized in Table 3.

Generally, there was variation in nurses' level of job satisfaction across socio-demographic variables (Table 4). The specialty of the nurse was significantly associated with the level of job satisfaction (X² = 13.662 [5], P < 0.018), as were age group (X² = 36.197 [10], P < 0.000) and gender (X² = 16.423 [10], P < 0.002), while other variables such as education, rank and years of experience had no significant association with job satisfaction.
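As flagged in the Method of Data Analysis, here is a small worked example of the score categorization and the Kruskal-Wallis test; the group scores are simulated, not the study's data, and the group sizes merely echo the specialty proportions reported above.

```python
import numpy as np
from scipy.stats import kruskal

def satisfaction_level(total_score):
    """Categorize a summed satisfaction score as in the analysis above."""
    if 16 <= total_score <= 32:
        return "dissatisfied"
    if 33 <= total_score <= 48:
        return "moderately satisfied"
    return "well satisfied"          # 49-64

# Simulated scores for three specialty groups
rng = np.random.default_rng(2)
general = rng.integers(28, 50, 108)
icu     = rng.integers(20, 42, 34)
theatre = rng.integers(25, 47, 14)

H, p = kruskal(general, icu, theatre)
print(satisfaction_level(int(general.mean())))
print(f"H = {H:.3f}, P = {p:.3f}")   # P < 0.05 -> satisfaction differs by specialty
```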
Discussion

This study revealed that the majority of the respondent nurses (68.7%) working in the study area are moderately satisfied with their job overall, but a very large number remained dissatisfied (31.3%), and no nurses (0%) reported being well satisfied. This finding is not encouraging for a healthcare system that has to cater for a large population in the state and nearby states. It is also troubling for a teaching hospital that is meant to train health professionals. The finding implies poorer quality of care received by clients and decreased hospital output, since job satisfaction is linked to the quality of care rendered and the productivity of the hospital. It may also help explain the high rate at which nurses and other healthcare workers are leaving the country's health system for those of western countries [13,14]. It therefore demands the urgent attention of the government, the managing director and his team to improve the level of job satisfaction of the nurses, who are the first point of call in the hospital, so as to better the care received by patients. This finding is in line with that of Gyang et al. [20] in a study of job satisfaction among healthcare workers in Nigeria, in which 64.8% of the workers were satisfied with their job. However, it was lower than the 98.1% reported in a similar study in Kano State, Nigeria [21]. These differences may be attributed to managerial tactics, study methodology and cultural diversity.

Across the items on the scale, nurses were grossly dissatisfied with the manner in which the hospital administrators convey hospital policy, the communication flow within the hospital, the salary of the nurses, the availability of the equipment required to perform their duties, and the benefits offered by the hospital; 78% reported low morale amongst the workers, lack of necessary training for the job was reported across the respondents, and the supervisors were reportedly not promoting a spirit of teamwork. Every worker needs some form of verbal or other motivation, such as praise and accolades, to be inspired to work more [6]. This is critical, since the morale of the workers is linked to their job performance and the quality of care rendered. Moreover, healthcare is teamwork that cannot be done in isolation; for effectiveness and efficiency to be enhanced in the healthcare system, the spirit of teamwork must be maintained. This agrees with the findings of Gyang et al. [20] in a similar study of job satisfaction among health workers, in which the workers were grossly dissatisfied with similar items on the measurement scale. However, the nurses in this study showed a higher level of satisfaction with their supervisors and with the level of cordiality with their co-workers.

Moreover, the level of job satisfaction among the nurses showed significant association with some demographic characteristics, namely age, gender and specialty. Across the specialties studied, a statistically significant association was noted between job satisfaction and specialty (P < 0.018). The age of the respondents was also significantly associated with their level of job satisfaction (P < 0.000). These findings imply that the level of satisfaction of a respondent depends in part on the specialty/department where they work and on their age.
The hospital management, in attempting to improve the working conditions of the workers and their job satisfaction, should therefore consider the specialty areas and the age mix of the workers, since these affect job satisfaction. This finding is similar to those of Amoran et al. [22] and Gyang et al. [20], in which the age of the individuals was significantly associated with job satisfaction (P < 0.000), with younger adults being more satisfied than workers above 40 and 44 years of age, respectively. However, the years of work experience, educational level and position/rank of the individual workers were not significantly associated with the level of job satisfaction. This does not agree with the findings of a similar study in which workers with more years of work experience were reported to be more satisfied than others [20].

Conclusion

Job satisfaction amongst nurses in Southeast Nigeria is moderate, as reported by 68.7% of the respondents; however, 31.3% were dissatisfied and 0% reported being well satisfied with the nursing job in the area. Statistical tests revealed that the specialty, gender and age of the respondents had significant associations with the level of job satisfaction, but the level of education, rank and years of work were not correlated with the workers' level of job satisfaction.

Limitations of the Study

This study is limited by the fact that it focused on nurses in the Federal healthcare facility in Abakaliki. In view of this, the findings may not be directly applicable to nurses in other healthcare settings. The data presented in this study are based on the respondents' subjective views, and as such may be over- or under-reported. The scale was in English, on the assumption that nurses understand English, and was not translated into local dialects; this may have caused misunderstanding of the concepts under study.

Implication for Further Studies

More research should be carried out on the level of job satisfaction among other groups of healthcare workers in the same hospital, to provide holistic evidence in support of policy adjustment. Nurses in other hospitals in the zone should also be assessed, to compare findings and possibly provide more specific, fact-based advice to hospital management. A further study of patients' satisfaction with the quality of care in this same hospital is warranted, to further assess the true situation of healthcare.

Clinical Implication

The healthcare systems of the country and of other developing countries with similar findings should act to improve the working conditions of nurses and other healthcare workers so as to improve healthcare delivery to citizens. The management of hospitals should critically appraise their managerial roles, especially communication flow, to enhance cordiality at work and team spirit.
2020-07-23T09:08:19.536Z
2020-04-27T00:00:00.000
{ "year": 2020, "sha1": "0013b7d9ad9979283e3100191bf90f0ac9bab8df", "oa_license": "CCBY", "oa_url": "https://arpgweb.com/pdf-files/ijhms6(4)64-73.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "f976acab691decf3ea9940b866fc52c9a723b69a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Geography" ] }
268598715
pes2o/s2orc
v3-fos-license
Sex, Body Mass Index, and APOE4 Increase Plasma Phospholipid–Eicosapentaenoic Acid Response During an ω-3 Fatty Acid Supplementation: A Secondary Analysis Background The brain is concentrated with omega (ω)-3 (n–3) fatty acids (FAs), and these FAs must come from the plasma pool. The 2 main ω-3 FAs, docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA), must be in the form of nonesterified fatty acid (NEFA) or esterified within phospholipids (PLs) to reach the brain. We hypothesized that the plasma concentrations of these ω-3 FAs can be modulated by sex, body mass index (BMI, kg/m2), age, and the presence of the apolipoprotein (APO) E-ε4 allele in response to the supplementation. Objectives This secondary analysis aimed to determine the concentration of EPA and DHA within plasma PL and in the NEFA form after an ω-3 FA or a placebo supplementation and to investigate whether the factors change the response to the supplement. Methods A randomized, double-blind, placebo-controlled trial was conducted. Participants were randomly assigned to either an ω-3 FA supplement (DHA 0.8 g and EPA 1.7 g daily) or to a placebo for 6 mo. FAs from fasting plasma samples were extracted and subsequently separated into PLs with esterified FAs and NEFAs using solid-phase extraction. DHA and EPA concentrations in plasma PLs and as NEFAs were quantified using gas chromatography. Results EPA and DHA concentrations in the NEFA pool significantly increased by 31%−71% and 42%−82%, respectively, after 1 and 6 mo of ω-3 FA supplementation. No factors influenced plasma DHA and EPA responses in the NEFA pool. In the plasma PL pool, DHA increased by 83%−109% and EPA by 387%−463% after 1 and 6 mo of ω-3 FA supplementation. APOE4 carriers, females, and individuals with a BMI of ≤25 had higher EPA concentrations than noncarriers, males, and those with a BMI of >25, respectively. Conclusions The concentration of EPA in plasma PLs are modulated by APOE4, sex, and BMI. These factors should be considered when designing clinical trials involving ω-3 FA supplementation. This trial was registered at clinicaltrials.gov as NCT01625195. 
Introduction

Phospholipids (PLs) make up 24% of the brain's dry matter [1]. Docosahexaenoic acid (DHA) represents 90% of the omega (ω)-3 (n-3) fatty acids (FAs) in the brain [2]. Brain PLs enriched in ω-3 FAs are involved in neurodevelopment, membrane fluidity, neuronal transmission, and neuroinflammation, among other functions [3][4][5]. Epidemiologic studies support that consuming a diet rich in ω-3 FAs decreases the risk of developing dementia and Alzheimer disease (AD) [6][7][8][9]. ω-3 FAs, especially eicosapentaenoic acid (EPA) and DHA, are synthesized by our body at rates of ~5% and <0.5%, respectively, from their precursor, the essential fatty acid α-linolenic acid [10]. Therefore, conversion to these 2 active forms is less efficient than direct dietary absorption [11]. Previous studies have shown that physiologic, environmental, and genetic factors, such as carrying the ε4 allele of apolipoprotein (APO) E (APOE4) [12,13], the major genetic risk factor for late-onset AD, can modify ω-3 FA metabolism [12]. However, these studies investigated either plasma total lipids or the total PL profile at a specific time during the study, and not the specific plasma pools targeting the brain. To reach the brain, preformed DHA or EPA must circulate in the blood either as nonesterified fatty acids (NEFAs) or as FAs esterified within PLs [1,5,14]. Hence, in this study, we sought to investigate whether factors such as age, BMI, carrying an APOE4 allele, or sex modify plasma EPA and DHA concentrations in the NEFA and PL pools. The objectives of this study were to evaluate the increase of EPA and DHA concentrations in the PL and NEFA plasma pools after an ω-3 FA supplementation, and to investigate whether sex, BMI, age, and APOE4 status modify DHA and EPA concentrations in PLs and NEFAs while participants were supplemented with an ω-3 FA supplement or a placebo.

Study design and participants

This is a secondary analysis of a previously published randomized controlled trial [15]: a 6-mo randomized placebo-controlled trial of ω-3 FA supplementation and cognitive performance throughout adulthood. The study was conducted at the Research Center on Aging, Centre intégré universitaire de santé et de services sociaux, Sherbrooke, Canada. The Research Ethics Committee of the CIUSSSE-CHUS approved the study protocol. This study is registered on https://clinicaltrials.gov under registration number NCT01625195. All participants gave their written consent to participate. In the study, 193 healthy participants, aged between 20 and 80 y, completed the 6-mo supplementation trial. Plasma samples from 189 of the 193 participants were available for retrospective analysis of plasma lipids. The placebo group consisted of 97 participants, and the ω-3 FA-supplemented group comprised 92 participants.
Procedures and products

All inclusion and exclusion criteria are detailed in the original article on the clinical study [15]. Fasting blood samples were collected at baseline (T0) and at 6 mo (T6) to evaluate the before- and after-supplementation concentrations of EPA and DHA in the NEFA and PL pools. Blood samples were also collected at 1 mo (T1) to evaluate the short-term increase following initiation of the ω-3 FA supplementation. All plasma samples were obtained from participants after fasting for 12 h prior to blood sampling. Plasma biochemistry was conducted as previously described [15] and included concentrations of fasting glucose, total cholesterol, triglycerides (TGs), cholesterol in HDLs, cholesterol in LDLs, and glycated hemoglobin.

The ω-3 FA supplementation consisted of 4 capsules of fish oil per day providing 1.7 g EPA and 0.8 g DHA esterified as ethyl esters for 6 mo. Participants were instructed to take 2 capsules at breakfast and 2 at dinner. The placebo consisted of high-oleic acid soybean and corn oil in a 1:1 ratio with no EPA or DHA (4 capsules/d).

Lipid plasma extraction and separation of lipid classes

Samples were stored on ice before being centrifuged at 1700 × g for 10 min at 4 °C. Plasma samples were aliquoted and stored at −80 °C until further analysis. A total of 189 samples were available for each time point (T0, T1, and T6). All samples were coded with a random number to keep the experimenter blinded to treatment and time point. Prior to the extraction of lipids from plasma samples, internal standards were added, allowing quantification of the different lipids. These standards included 0.062 mg of 1,2-dipentadecanoyl-sn-glycero-3-phosphocholine (C15:0; Avanti) and 0.0072 mg of lignoceric acid (C24:0; Sigma), added to 200 μL of thawed plasma. Plasma total lipids were then extracted using a chloroform-methanol solution (2:1; vol/vol) [16]. The total lipid extract was reconstituted in 200 μL of chloroform after evaporation under a N₂ stream. Solid-phase extraction was used to separate PLs and NEFAs from the other plasma lipid classes, as previously described by Chouinard-Watkins et al. [17] and Kaluzny et al. [18]. In brief, 200 μL of the lipid extract was transferred to a Bond Elut NH₂ cartridge (Agilent) previously conditioned with 3 mL of hexane. Lipid fractions were then eluted using eluents of different polarity. Neutral lipids were eluted first with 6 mL of a chloroform-isopropanol solution (2:1; vol/vol), PLs were eluted with 2 mL of chloroform, NEFAs were eluted with 2.25 mL of ethyl ether-acetic acid (98.7:1.3; vol/vol), and the eluates were collected. The system allowed 8 samples at a time, and extractions were conducted under normal gravity. Purity of the fractions was validated using thin-layer chromatography as previously described [19]. The PL and NEFA fractions were methylated with 3 mL of BF₃-MeOH (14%; Sigma-Aldrich) at 90 °C for 30 min. PLs were reconstituted in 400 μL of hexane and NEFAs in 100 μL prior to being transferred into vials.

Analyses were performed using gas chromatography (model 6890; Agilent) with a BPX-70 fused capillary column (SGE, 50 m). Detection was performed by a flame ionization detector with the same parameter settings described by Chevalier et al. [20]. Chromatogram analysis was performed using OpenLab CDS ChemStation. Absolute concentrations were generated with the help of the internal standard and calibration curves, with relative response factors made with the FA standard Nu-Chek Prep GLC-569B.
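The quantification step can be summarized by a single ratio calculation. The sketch below is a simplified illustration using the C15:0 internal standard mass stated above; the peak areas and the relative response factor are made-up values, not the study's calibration data.

```python
def concentration_mg_per_ml(area_analyte, area_istd, istd_mg, rrf, plasma_ml=0.2):
    """Analyte concentration in plasma (mg/mL) from GC peak areas:
    mass_analyte = (area_analyte / area_istd) * istd_mg / rrf,
    divided by the plasma volume extracted (0.2 mL here)."""
    mass_mg = (area_analyte / area_istd) * istd_mg / rrf
    return mass_mg / plasma_ml

# Hypothetical PL-fraction peak areas against the 0.062 mg C15:0 standard
print(concentration_mg_per_ml(area_analyte=1250.0, area_istd=5000.0,
                              istd_mg=0.062, rrf=0.95))
```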
Statistical analysis

The sample size used in this study was based on the plasma available from the main clinical study; we included 97 participants in the placebo group and 92 in the ω-3 FA supplementation group. To confirm that there would be adequate statistical power, the sample size was calculated based on data from a previous study in our research group on EPA and DHA concentrations in the PL and NEFA pools [17]. Calculations were done to reach 80% statistical power with a type 1 error of 5%. The required sample size was estimated at ≥25 per group. Anthropometric characteristics were analyzed using an unpaired t test if data were normally distributed; if not, a nonparametric Mann-Whitney test was used. A P value of <0.05 was considered statistically significant. The primary outcomes were the EPA and DHA concentrations in the PL and NEFA plasma pools in participants under treatment (ω-3 or placebo). The secondary outcomes were to determine whether the delta over baseline, often referred to as the plasma response, in EPA and DHA concentrations in plasma PLs and NEFAs after ω-3 FA supplementation was modulated by sex, BMI, age, and APOE4 status. For the primary outcome, a 2-way ANOVA with time (baseline and 1 and 6 mo of supplementation) and treatment (placebo or ω-3) as factors was performed to investigate the interaction between these factors on the dependent variables and their individual effects. Time was included in the model as a repeated measure. For the secondary outcomes, 3-way ANOVAs with time, treatment, and factor (sex, BMI, APOE4 status, or age) were performed to test whether the factors modulate the ω-3 FA response (ΔEPA and ΔDHA) in PLs and NEFAs. Statistical analyses were performed using GraphPad Prism version 9.3.1 for Windows.

Results

Plasma samples of 189 participants were analyzed at each time point unless stated otherwise; 97 participants were allocated to the placebo group and 92 to the ω-3 FA group. Of the 189 participants, 126 were females and 63 were males. Moreover, 89 participants had a BMI of >25, 59 participants were younger than 40 y, and 71 were older than 60 y. Forty-one participants carried ≥1 ε4 allele of APOE (APOE4 carriers) (Figure 1). Further information regarding the number of participants of each APOE genotype is provided in Supplemental Table 1.

Participants' clinical characteristics by age, sex, BMI, and APOE4 status are described in Table 1. In the whole cohort, the mean age was 49.8 ± 16.2 y, and the average BMI was 26.1 ± 4.9. Blood biomarkers were all within the clinically normal range. When splitting participants by the different factors analyzed, plasma HDL cholesterol concentrations were higher in females than in males (P < 0.0001), whereas glucose concentrations were higher in males than in females (P = 0.0358). Those with a BMI of >25 were older and had higher plasma concentrations of TGs and glucose and lower HDL cholesterol concentrations (P < 0.0001) than those with a BMI of ≤25. Regarding APOE4 status, LDL cholesterol concentrations were significantly higher in carriers than in noncarriers. Finally, in participants older than 60 y, BMI and plasma concentrations of TGs, LDL cholesterol, and glucose were significantly higher than in those younger than 40 y. Further breakdowns of participants' clinical characteristics by treatment and sex, treatment and age, treatment and BMI, and treatment and APOE4 status are provided in Supplemental Tables 2-5, respectively.
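The sample-size calculation described in the Statistical analysis section above can be reproduced along these lines with statsmodels; the effect size (Cohen's d = 0.8) is our assumption, chosen only to show how an estimate of roughly 25 per group arises at 80% power and α = 0.05.

```python
from statsmodels.stats.power import TTestIndPower

# Per-group sample size for an independent two-sample t test
n = TTestIndPower().solve_power(effect_size=0.8,        # assumed Cohen's d
                                alpha=0.05, power=0.80,
                                ratio=1.0, alternative='two-sided')
print(round(n))   # ~26 participants per group
```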
Primary outcome

There was a significant time × supplement interaction for DHA and EPA concentrations in plasma PLs and NEFAs, supporting that the increase was higher in the ω-3 FA group than in the placebo group (Figure 2). There was also a sharp increase in EPA and DHA concentrations in PLs and NEFAs between baseline and 1 mo of supplementation. The PL-EPA concentration reached a concentration 387% higher than that at baseline after supplementation with ω-3 FAs for 6 mo (Figure 2B), and the PL-DHA concentration reached 83% over baseline (Figure 2A). A post hoc analysis supported that, at baseline, the concentration of PL-DHA was significantly higher in the placebo group than in the ω-3 group (P = 0.0012). At 1 and 6 mo of supplementation, PL-DHA and PL-EPA were significantly higher (P < 0.0001) in the ω-3 FA-treated group than in the placebo group.

In the NEFAs, the ω-3 FA-supplemented group had an increase in DHA and EPA concentrations by 31% and 42%, respectively, after 1 mo and by 71% and 82%, respectively, after 6 mo. The concentrations in the NEFA-EPA and NEFA-DHA pools were statistically different between 1 and 6 mo in the ω-3 FA-treated group, with P values of 0.0079 and <0.0001, respectively. There were no differences in the placebo group. At 1 and 6 mo of supplementation, NEFA-DHA and NEFA-EPA concentrations were significantly higher in the ω-3 FA-treated group than in the placebo group (P < 0.0001).

Secondary outcomes

In plasma PLs, Δ over baseline concentrations of EPA were 33% and 26% higher in females than in males after 1 and 6 mo of ω-3 FA supplementation, supporting a higher response in females than in males for a similar ω-3 FA dose and duration (Pint = 0.0004 and Psex < 0.0001). Δ over baseline of PL-DHA did not differ by sex (Pint = 0.4543) (Figure 3A, B).

After 1 and 6 mo of ω-3 FA supplementation, Δ over baseline of EPA in PLs was 24% and 15% higher in participants with a BMI of <25 compared with participants with a BMI of >25 (Figure 3D) (PBMI = 0.0109 and Pint = 0.0097). Δ over baseline of DHA in the PLs did not differ by BMI group (Figure 3C, D).

Δ over baseline of EPA in the PLs differed in those carrying an APOE4 allele compared with the noncarriers (Figure 3E, F). There was a genotype by diet interaction (P = 0.0228) where the APOE4 carriers had a 14% and 28% higher Δ over baseline of PL-EPA after 1 and 6 mo of ω-3 FA supplementation compared with the noncarriers. Δ over baseline concentrations of EPA and DHA in the NEFAs were not statistically different by sex, BMI, APOE4 status, or age (Figure 4).

Discussion

This study is a secondary analysis of a previously published clinical trial on ω-3 FA supplementation and cognitive performance [15]. In this secondary analysis, the primary objective was to investigate the concentrations of EPA and DHA in the plasma pools targeting the brain, the NEFAs and PLs, after 6 mo of supplementation. We found a significant time × supplement interaction for DHA and EPA concentrations in plasma PLs and NEFAs, supporting that the increase is higher in the ω-3 FA-treated group than in the placebo group.

This result was expected and similar to that found by Vidgren et al. [21], who examined the concentration of ω-3 FAs within the plasma pools of healthy men supplemented with fish oil containing an ω-3 FA dose approximately similar to that provided in this study, but for half the duration. In the study by Vidgren et al.
[21], the increases in PL-EPA and PL-DHA concentrations were similar to what we obtained in this study. Another study also found an approximately similar increase in EPA and DHA concentrations [22] with an ω-3 FA supplementation of >2 g/d for ≥12 weeks. Hence, with a supplement containing twice as much EPA as DHA, as in this study, the increase in PL-EPA was 2-3 times the baseline concentration, whereas for DHA, it was <1 time the baseline plasma PL-DHA concentration [22].

In this study, the EPA concentration of the ω-3 FA supplement was higher than that of DHA, so the increase in PL-EPA was expected to be higher than that of PL-DHA. However, the ratio of the increase in EPA to the dose given was twice that of DHA. This could be interpreted as DHA being more efficiently removed from the blood circulation through lipoprotein lipase activity [23]. Indeed, it was reported that lipolysis of DHA by lipoprotein lipase occurs at a higher and faster rate than that of EPA, leaving more EPA in chylomicron remnants. Therefore, EPA remains transiently for a longer period in the lipoproteins than DHA, which can change PL synthesis in the liver [23,24]. To support this concept, we previously showed that EPA and DHA concentrations esterified in plasma TGs increased by 4-fold and 2-fold, respectively, after an ω-3 FA supplementation, whereas the increase was 4-fold and <1-fold in cholesterol esters [25]. Therefore, FA metabolism differs by the type of FA supplemented, but there are also other physiologic, environmental, and genetic factors that can contribute to modifying responses to an ω-3 FA supplementation.

In this study, the increase in EPA and DHA concentrations was higher in the PL pool than in the NEFA pool. The reason for this lies in the fact that when ω-3 FAs are consumed, their concentrations rise within the complex lipid pools of the bloodstream, including TGs, cholesterol esters, and PLs. However, the increase in the circulating NEFAs is modest [26]. This is primarily because NEFAs found in circulation predominantly originate from the breakdown of TGs stored in adipose tissue, and these TGs contain minimal amounts of EPA and DHA [27,28]. A study reported that in adipose tissue, EPA increased from 0.12% to 0.13% and DHA from 0.27% to 0.30% after 2 g/d of EPA+DHA for 6 weeks [29]. This corresponds to a 7% and 10% increase, respectively, whereas the increase in the plasma ranged from ~30% for NEFA-DHA to 463% for PL-EPA in this study.
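Because responses are reported throughout as percent change over baseline (Δ over baseline), a small sketch of that computation may help; the column names and numbers below are invented for illustration.

```python
# Minimal sketch of the "Δ over baseline" (plasma response) computation,
# assuming per-participant concentrations at T0 (baseline), T1, and T6.
import pandas as pd

# Invented example values (mg/L), not data from the study
df = pd.DataFrame({
    "participant": [1, 1, 1],
    "time": ["T0", "T1", "T6"],
    "PL_EPA": [10.0, 35.0, 48.7],
})

baseline = df.loc[df["time"] == "T0", "PL_EPA"].iloc[0]
df["delta_over_baseline_pct"] = (df["PL_EPA"] - baseline) / baseline * 100.0
print(df)
# The T6 row shows +387%, matching the magnitude of change reported for PL-EPA
```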
In this study, we also sought to investigate whether sex, BMI, age, and the APOE4 status specifically modify the EPA and DHA responses in NEFAs and PLs after an ω-3 FA supplementation. We found that the response of PL-EPA was modified by sex, BMI, and the APOE4 status. Our data support that females had higher PL-EPA responses at 1 and 6 mo under an ω-3 FA supplementation than males. These findings are consistent with previous studies that have reported sex-specific differences in ω-3 FA metabolism [30-32]. The hypothesis explaining this result was related to the presence of female sex hormones, similar to that reported in other studies [33-35]. Participants in this study included females of premenopausal and postmenopausal ages (range, 20-80 y) with different hormonal statuses, and hence, the population recruited in this study was heterogeneous. Information regarding menopausal status or use of hormone replacement therapy was unfortunately not collected in this study. However, our results do not support a difference in EPA or DHA between males and females at different ages. This suggests that, at a population level, menopausal status and/or the use of hormone replacement therapy did not have a major contribution to EPA and DHA concentrations. Other mechanisms could be implicated, such as a higher turnover of PL-EPA in males than in females.

Another factor that we previously identified as being involved in the ω-3 FA supplementation response is BMI [17]. As we previously reported, those with a BMI of ≤25 had a higher plasma increase from baseline in PL-EPA after 1 and 6 mo of supplementation than those with a BMI of >25. However, in this study, there was no significant difference in PL-DHA concentrations by BMI category. In the study by Chouinard-Watkins et al. [17], there was a significant BMI effect for PL-EPA (P = 0.018), where participants with a BMI of ≤25.5 had a 63% greater change from baseline than those with a BMI of >25.5. Our previous study and this study were conducted in 2 different countries with different background diets, which emphasizes that BMI is a factor modifying the plasma PL response to ω-3 FA supplementation.

Other groups also reported a lower response of EPA in those with higher BMI, and this was potentially dose independent [36]. Indeed, Yee et al.
[37] reported in 48 females that BMI was correlated with a lower increase in EPA and DHA concentrations for any given dose provided during the supplementation (0.84, 2.52, 5.04, and 7.56 g EPA+DHA). Hence, the lower increase in PL-EPA concentration in those with a high BMI is perhaps because of higher use, greater removal from the plasma in this population, or a combination of both factors. One explanation is that those with higher BMI usually have chronic low-grade inflammation [38-41]. Notably, the oxylipin profiles of obese individuals compared with those of lean participants showed decreased concentrations of ω-3 FA-derived oxylipins, including the 14,15-DiHETE and 17,18-DiHETE oxylipins [42]. In addition, obese individuals had higher concentrations of ω-6-derived oxylipins. It is essential to note that these findings were not within the context of ω-3 FA supplementation. Therefore, further investigations are warranted to determine whether the observed differences in oxylipin profiles contribute to the lower plasma PL-EPA responses in individuals with higher BMI following ω-3 FA supplementation, because there is currently a lack of studies comparing oxylipin profiles in obese and lean participants under such conditions.

Our group also previously reported that, at baseline, older adults have higher plasma concentrations of EPA than younger adults [43-45]. However, in this study, in contrast to a previous article by our group in which the increase in DHA in the plasma of older adults was 42% higher than that in younger adults, the increases in DHA and EPA in plasma PLs were not different by age [44]. However, in the study by Vandal et al. [44], only men were included, whereas this study included both males and females, which could partly explain the divergence in the results. Hence, there might be sex × age interactions in the response to ω-3 FA supplementation, but this study was not statistically powered to investigate this interaction.

Finally, the effect of the APOE4 status on the response to ω-3 FA supplementation was investigated. APOE4 carriers had a higher increase in PL-EPA concentrations than noncarriers after the ω-3 FA supplementation, whereas the PL-DHA change from baseline was not modified by APOE4 status. This result does not align with a previous study from our group in which APOE4 carriers were lower responders to supplementation than noncarriers [46]. In the study by Plourde et al. [46], the changes from baseline in EPA and DHA concentrations in the plasma NEFAs and TGs were lower in APOE4 carriers after the ω-3 FA supplementation, although the ω-3 dose was similar to that in this study (1.9 g EPA/d and 1.1 g DHA/d for 6 wk). This result supports that there is a need to better understand the metabolism of ω-3 FAs in carriers and noncarriers of the APOE4 allele, because it could be involved in the risk of developing cognitive decline during aging [12,47].
Overall, our findings indicate that sex, BMI, and carrying the APOE4 allele modify the plasma PL-EPA response when participants are supplemented with ω-3 FAs. Therefore, these factors may modulate the efficiency of ω-3 FA metabolism, their transport, or their utilization in different subgroups of individuals. Hence, these factors should be carefully considered when designing trials where the primary outcome is the increase in PL-EPA concentration. PL-EPA, specifically lysophosphatidylcholine-EPA, was found to significantly increase the concentrations of EPA in the brain by 100-fold, and DHA by 2-fold, compared with NEFA-EPA [48]. In comparison with lysophosphatidylcholine-EPA, NEFA-EPA increased EPA concentrations more in adipose tissue than in the brain [48]. A recent study in 2023 found that phosphatidylserine (PS)-EPA had neuroprotective effects on primary hippocampal neurons [49]. The authors of that study found that incubation of hippocampal neurons with PS-EPA increased p-CREB (phosphorylated cyclic adenosine monophosphate response element-binding protein) expression by 159% (P < 0.05), whereas PS-DHA did not have a significant effect. Phosphorylated CREB improves synaptic plasticity by upregulating synapse-related proteins such as the synuclein proteins. Interestingly, in this same study, synuclein expression increased to 130% after incubation with EPA-PS (P < 0.05) but was not significantly affected after incubation with DHA-PS. Moreover, the antiapoptotic Bcl-2 protein, which acts as an inhibitor of the apoptotic pathway, increased by 92% and 21% in the EPA-PS-treated and DHA-PS-treated neurons, respectively [49]. These findings emphasize the significant role of various PL-EPA species in preventing and mitigating neurodegeneration and suggest potential preventive measures for neurodegenerative diseases like AD.

The mechanisms explaining why the different factors we selected (sex, BMI, APOE status, and age) change the response in EPA and DHA concentrations are not fully understood; however, others have also reported similar results [50,51]. Therefore, clinical trials should include enough participants within each factor to be able to stratify their data into responders and lower responders and perhaps adjust the treatment according to these factors so that everyone benefits from the ω-3 FA supplementation. Whether a higher dose or a longer duration is required for the lower responders should be questioned, and a better understanding of ω-3 FA metabolism should be emphasized. This secondary analysis had strengths and limitations. The sample size was large enough to study the different factors with enough statistical power. Another strength is that the data are expressed in concentrations rather than relative percentages, which could be biased by the number of FAs that are identified, as stated in the study by Brenna et al.
[52]. This will allow better comparisons of the actual concentrations with other clinical trials. Regarding limitations, this study was performed only on plasma samples, and acquiring red blood cell or tissue concentrations would have confirmed that the same factors change the ω-3 concentrations in other tissues/cells. We also acknowledge that a potential bias regarding an incomplete data set might have occurred because only 189 of the 243 participants randomly assigned were analyzed, hence excluding 54 participants from the data set. However, for 43 of the 54 participants, samples were not available because the participants dropped out during the study. Moreover, 7 of the 54 participants were no longer eligible to start the trial, as identified in a second-stage revision of the eligibility criteria by the research team. Finally, for 4 of the 54 participants, plasma samples were no longer available in our storage. Hence, these missing data were not removed from the data set by the research team; rather, they were never generated. Although this reduced the sample size, we still had enough statistical power. However, this might have limited the generalizability of our results and affected the precision of P values.

In conclusion, this study provides evidence that ω-3 FA supplementation effectively increases EPA and DHA concentrations in both the NEFA and PL plasma pools. Moreover, plasma PL-EPA responses to the ω-3 FA supplementation were modified by sex, BMI, and APOE4 status. Therefore, these factors should be considered when designing clinical trials where the PL-EPA concentration is important to the research question.

FIGURE 1. Flow chart of the clinical study and analyzed plasma samples.

FIGURE 2. Docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA) concentrations (mg/L) in the phospholipid (PL) and nonesterified fatty acid (NEFA) plasma compartments after a 6-mo supplementation of 1.7 g EPA and 0.8 g DHA, with plasma samples at baseline, 1 mo, and 6 mo. The data are expressed as means ± SD. Of 189 participants, 97 were allocated to the placebo group and 92 to the ω-3-supplemented group. A repeated-measures 2-way ANOVA was performed to compare the placebo and ω-3 groups and time points and their interaction (supplement × time).

TABLE 1. Clinical characteristics of the participants. Abbreviations: APO, apolipoprotein; HDL-C, high-density lipoprotein cholesterol; LDL-C, low-density lipoprotein cholesterol; TG, triglyceride. 1 Results are means ± SD when data were normally distributed and median (Q1, Q3) when not normally distributed. Means between males and females were compared using an unpaired t test when normally distributed, and medians were compared using a Mann-Whitney nonparametric test when data were not normally distributed. 2 Unpaired t test used. 3 Mann-Whitney nonparametric test used.
Evolution of Calcareous Deposits and Passive Film on 304 Stainless Steel with Cathodic Polarization in Sea Water

The change of protective current density, the formation and growth of calcareous deposits, and the evolution of the passive film on 304 stainless steel (SS) were investigated at different potentials of cathodic polarization in sea water. Potentiostatic polarization, electrochemical impedance spectroscopy (EIS), and the surface analysis techniques of scanning electron microscopy (SEM), energy dispersive X-ray (EDX) microanalysis, and X-ray diffraction (XRD) were used to characterize the surface conditions. It was found that the protective current density was smaller when holding the polarization at −0.80 V (vs. saturated calomel electrode (SCE), same as below) than at −0.65 V. Calcareous deposits could not be formed on 304 SS with polarization at −0.50 V, while the steel was still well protected. The formation rate, the morphology, and the constituents of the calcareous deposits depended on the applied potential. The resistance of the passive film on 304 SS decreased at the first stage and then increased when polarized at −0.80 V and −0.65 V, which was related to the reduction and the repair of the passive film. For the stainless steel polarized at −0.50 V, the film resistance increased with polarization time, indicating that the growth of the oxide film was promoted.

Introduction

Type 304 stainless steel (SS) is often used for facilities and structures exposed to sea water, and is prone to suffering from localized corrosion like pitting and crevice corrosion due to chloride ion attack [1-3]. There are many technologies to protect 304 SS from corrosion in sea water, among which cathodic protection is one of the most effective methods [1,4]. The potential is an important factor to consider for cathodic protection. By shifting the potential negatively from the open circuit potential (OCP), the thermodynamic tendency of the corrosion reaction can be mitigated, the adsorption of aggressive chloride ions can be held back, and the pH value close to the surface can be elevated [5]. All of the above exert a positive effect on the anti-corrosive performance of 304 SS. However, there is an optimized range of potentials for cathodic protection. The stainless steel cannot achieve effective protection without enough cathodic polarization [1], while it will suffer over-protection when the potential is too negative, even leading to hydrogen embrittlement, especially for some high strength materials [6-9]. Although some guidelines on the proper potential of cathodic protection can be found in the literature [4,8,10-12], an optimized potential of protection for 304 SS in sea water still needs to be determined with a balanced consideration of cost and effectiveness [1,3,6]. Corrosion resistance of stainless steels generally comes from the passive film formed on the surface, which depends on the composition, structure, and electrochemical performance of the film. The passive behavior of stainless steel is affected by many factors, such as the type of stainless steel, pre-treatment, electrolytes, and applied potentials [13-16]. The reduction of the passive film on stainless steel with cathodic polarization at a certain potential will also lead to a high risk of corrosion [17].
Cathodic protection often results in the formation of calcareous deposits on the metal surface in sea water [18-23]. The cathodic reactions of dissolved oxygen reduction, or of water reduction to evolve hydrogen at more negative potentials, generate hydroxyl ions, increasing the pH value of the electrolyte adjacent to the metal surface, which promotes the formation of calcareous deposits [19,21-23]. Some works indicate that CaCO3 is saturated at pH 8.7 or even lower in normal sea water and is ready to deposit as the inorganic carbonic equilibrium in the electrolyte is changed, while the critical pH value for Mg(OH)2 to deposit is about 9.3 [24,25]. The composition and structure of calcareous deposits can be influenced in a complex manner by many factors, such as the environment, the applied polarization, and the substrate metal [5,7,20,25,26]. The morphologies of calcareous deposits formed under different conditions of cathodic protection for mild steel in sea water were analyzed in detail by Yang et al. [5], who demonstrated that the deposits consisted of an inner layer of co-deposited Mg(OH)2 and iron oxide, due to the presence of corrosion at the early stage of calcareous deposit formation, and an outer layer of CaCO3 with some Mg(OH)2 precipitated in the pores or cracks.

The insulating calcareous deposits play a very important role in the process of cathodic protection: they can decrease the protective current density, increase the service life of sacrificial anodes, and lower the cost of cathodic protection [27]. However, the formation of calcareous deposits is not always desirable. For example, for moving parts in sea water, such as rotating shafts in contact with bearings or hydraulic piston rods, smooth and clean surfaces are needed. In these cases, a proper cathodic potential shall be applied not only to protect the stainless steel parts from corrosion but also to avoid the formation of calcareous deposits on the surfaces.

The formation of calcareous deposits is influenced by many factors, such as the temperature, hydrostatic pressure (depth), velocity, and chemistry of the sea water; the current density, potential, and period of cathodic polarization; biofilms; substrate materials and surface preparation; and so on [26,28-32]. The precipitation mechanism and the protective properties of calcareous deposits on carbon steel substrates have been thoroughly studied [18,22,30]. However, only a few investigations have focused on the surface of stainless steels under cathodic protection [1,26,31,32]. The cathodic polarization should affect the oxide film, change the interfacial environment, and cause the development of calcareous deposits on the surface, which makes the evolution of the surface conditions of stainless steel in sea water much more complicated than that of carbon steel. Limited knowledge has been acquired on the modifications of the passive film in combination with calcareous deposits on stainless steel under different cathodic polarization in sea water. Obtaining a better understanding of the evolution of the passive film and calcareous deposits on stainless steel with the potential and time of polarization is of great significance to ascertain effective protection and improve the corrosion resistance of stainless steels in sea water.
In the present work, a comprehensive investigation with cathodic polarization at different potentials was carried out on 304 SS in sea water. Electrochemical impedance spectroscopy (EIS) was used to characterize the surface variation of 304 SS under polarization. Scanning electron microscopy (SEM), energy dispersive X-ray (EDX) microanalysis, and X-ray diffraction (XRD) were used to investigate the formation of calcareous deposits. The evolution of the calcareous deposits and the passive film on the surface of 304 SS is discussed.

Electrode Preparation

The test material used in the experiment was a 304 SS sheet of 2 mm in thickness from Shanxi Tai Gang Stainless Steel Co. Ltd. (Taiyuan, China) with the following composition (wt %): C 0.06, Si 0.54, Mn 0.90, P 0.040, S 0.017, Cr 18.48, Ni 8.06, and Fe balance. The test samples were cut from the sheet into pieces with dimensions of 20 mm × 20 mm × 2 mm and sealed with epoxy resin, leaving an area of 10 mm × 10 mm exposed. The working surface was ground using silicon carbide abrasive paper to 1500 grit, then rinsed with distilled water and dried in air.

Test Solution

The test solution was natural sea water from the Yellow Sea, taken from the corrosion test site of Qingdao, with a salinity of 33.4‰ and a pH value of 7.6. The solution was kept in a quiescent condition. The temperature was controlled at (25 ± 1) °C in a water bath, and the oxygen concentration was controlled at 8 mg/L by purging certain amounts of oxygen and nitrogen. The oxygen concentration was calibrated by a Polymetron 9582 dissolved oxygen detector (Hach Company, Loveland, CO, USA).

Electrochemical Tests

Electrochemical tests included the potentiodynamic polarization test, the potentiostatic test, and EIS measurement. The potentiodynamic polarization curve was recorded at a scanning rate of 20 mV/min, in the potential range from about −600 mV (vs. OCP) to +450 mV (vs. OCP). The potentiostatic polarizations were conducted at potentials of −0.50 V (vs. saturated calomel electrode (SCE), same as below), −0.65 V, and −0.80 V respectively, and the changes of current density were measured. EIS spectra were recorded at intervals of cathodic polarization, as well as at the beginning without polarization, to characterize the surface conditions. The EIS test was taken at the steady potential that the specimen had immediately prior to conducting the measurement, without decaying back to a natural potential [5]. The impedance spectra were measured in the frequency range from 100 kHz to 10 mHz with 49 points recorded, and the disturbing signal was a sinusoidal wave with an amplitude of ±10 mV. Each single experiment had three parallel samples for testing.

All the electrochemical tests were performed using a PARSTAT 2273 workstation (Princeton Applied Research, Oak Ridge, TN, USA) in a conventional three-electrode system with Pt as the counter electrode, SCE as the reference electrode, and the 304 SS as the working electrode. The data were analyzed with the C-View and ZSimpWin softwares (Princeton Applied Research, Oak Ridge, TN, USA).

SEM and XRD Analysis

The surface morphologies and elements of the samples after cathodic polarization were analyzed using a Zeiss Ultra-55 field-emission scanning electron microscope (Carl Zeiss, Oberkochen, Germany) equipped with an EDX analyzer. The surface phases of the samples after cathodic polarization were detected using XRD (Bruker D8, Karlsruhe, Germany) with Cu-Kα1 radiation (λ = 0.15406 nm).
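For readers who want to mirror the impedance sweep described under Electrochemical Tests numerically, a short sketch of the stated acquisition grid (100 kHz down to 10 mHz, 49 points, ±10 mV sinusoidal perturbation) follows; the variable names are illustrative and are not tied to the PARSTAT or ZSimpWin software.

```python
# Sketch of the EIS acquisition grid described above: 49 log-spaced
# frequencies from 100 kHz down to 10 mHz, with a ±10 mV sine perturbation.
import numpy as np

f_max, f_min, n_points = 1e5, 1e-2, 49
frequencies = np.logspace(np.log10(f_max), np.log10(f_min), n_points)  # Hz
amplitude = 10e-3  # V, peak amplitude of the disturbing sinusoidal signal

# Example perturbation at the first measurement frequency (illustrative only)
f = frequencies[0]
t = np.linspace(0.0, 5.0 / f, 1000)          # five periods of the sine wave
e_signal = amplitude * np.sin(2 * np.pi * f * t)
print(f"{frequencies[0]:.0f} Hz down to {frequencies[-1]:.3f} Hz, "
      f"{len(frequencies)} points")
```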
Polarization Tests

Figure 1 shows a typical potentiodynamic polarization curve of 304 SS in natural sea water. The initial open circuit potential (Ecorr) of 304 SS in sea water was −0.36 V. The suitable potential range for cathodic protection of the stainless steel can be determined from the potentiodynamic polarization curve. The most positive potential is the potential Ep at which oxygen concentration polarization begins, and the most negative potential Emax is the potential at which hydrogen evolution occurs. For 304 SS, the potential range of protection extended from −0.48 V (Ep) to −0.83 V (Emax), as derived from Figure 1. To investigate the effect of cathodic protection on the formation of calcareous deposits on the stainless steel, the potentials for the potentiostatic polarization were selected at −0.50 V, −0.65 V, and −0.80 V respectively. The most negative potential was chosen slightly positive of −0.83 V to avoid the effect of hydrogen evolution.

The variation with time of the current density needed to maintain a certain potential on 304 SS in natural sea water is shown in Figure 2. At the initial stage of polarization, the protective current density was high, then decreased to a stable value. It took about 80 h to reach a relatively stable current density when the polarization potential was −0.65 V, while only 30 h was needed to achieve a stable current density for polarization at −0.50 V and −0.80 V. The current density required to maintain a polarization potential, averaged over three parallel samples, was about 5.1 µA·cm−2, 6.5 µA·cm−2, and 3.1 µA·cm−2 with polarization at −0.80 V, −0.65 V, and −0.50 V respectively. The smallest current density, at −0.50 V, shall be attributed to this being the most positive potential under cathodic polarization. It is interesting that the stable current density needed for polarization at −0.80 V is smaller than that at −0.65 V.

Generally, the current density is determined by the reaction rate occurring at the interface of the electrode. In this 304 SS electrode system, several factors can affect the reaction rate, including the electrochemical reactions, the concentration of reactants, and the transportation of reactants and products. The current density will decrease to a stable value as a result of the consumption of reactants and the formation of a barrier layer. There are two possible reactions within the tested potentials: one is the reduction of the passive film, and the other is the reduction of oxygen. The transportation processes could be influenced by the barrier layer (calcareous deposits), since the oxygen concentration is controlled in the bulk solution in this study. Then, the variation of the current density shall be related to the effect of the calcareous deposits and the variation of the oxide film formed on the stainless steel with polarization at different potentials, which will be discussed later. It is also worth noting that there exists a current density increase at the initial stage for −0.65 V and −0.80 V, which may be attributed to the reduction of the passive film on 304 SS [33-37]. The prompt initial decrease of the current density with polarization at −0.50 V can be attributed to the consumption of oxygen, given that the passive film is not reduced at this potential [9]. Finally, the current densities decreased to relatively stable values at all three potentials.

EIS Measurement

EIS Evolution of 304 SS with Polarization at −0.80 V

EIS was used to monitor the evolution of the surface conditions of 304 SS after different periods of cathodic polarization in sea water. EIS is a very valuable technique for the indirect characterization of the passive film and the calcareous deposits on the surface of stainless steel [26,30-32,38].
Figure 3 shows the EIS spectra of 304 SS after polarization at −0.80 V in sea water for different times. It can be seen that the EIS evolution can be divided into two stages during the 168-h test period, with at least two obvious semi-circles appearing in the Nyquist plots after 20 h of polarization. To understand the EIS results, equivalent circuits were adopted to analyze the spectra based on the structure of the electrode and the processes occurring on the surface, as shown in Figure 4. The measured data and the simulated data are presented in the same plot. The simulated lines fit well with the measured data, which means that the equivalent circuits are reasonable.

At the first stage, before 10 h of polarization, Figure 4a is considered suitable for representing the electrode processes, which are dominated by the corrosion reaction of the metal and the effect of the passive film [30,39,40]. At the second stage, after that time, a new semi-circle appeared in the high frequency region of the Nyquist plot, as shown in Figure 3a, which is related to the produced calcareous deposits. The equivalent electrical circuit in Figure 4b can then be used, with three time constants attributed to the corrosion reaction, the passive film, and the imperfect calcareous deposits respectively. As the polarization time increased (after 50 h), the calcareous deposits became complete and compact, leading to a shift of the phase angle peak to higher frequency (see Figure 3b). At this stage, the equivalent circuit in Figure 4c is thought to be more suitable for representing the electrode structure. In the equivalent electrical circuits, Rs is the solution resistance, Rox is the resistance of the passive film, Qox is the capacitance of the passive film, Rct is the charge transfer resistance, Qdl is the double layer capacitance, Rc is the resistance of the calcareous deposits, and Qc is the capacitance of the calcareous deposits. The constant phase element Q is used instead of the capacitance C to obtain a more ideal simulation result, owing to the dispersion effect from the rough electrode surface [30].
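To make the equivalent-circuit description concrete, the sketch below shows how such a model generates an impedance spectrum. The nested ladder topology and all parameter values are assumptions for illustration; the actual topology of Figure 4c and the fitted values are given in the original figures and tables.

```python
# Sketch of a three-time-constant equivalent circuit with constant phase
# elements (CPEs), assuming a nested ladder topology:
# Rs + Qc || (Rc + Qox || (Rox + Qdl || Rct)).
# Z_CPE = 1 / (Q * (j*omega)**n); n = 1 recovers an ideal capacitor.
import numpy as np

def z_cpe(q, n, omega):
    return 1.0 / (q * (1j * omega) ** n)

def parallel(z1, z2):
    return z1 * z2 / (z1 + z2)

def z_total(freq_hz, Rs, Rc, Qc, nc, Rox, Qox, nox, Rct, Qdl, ndl):
    w = 2 * np.pi * freq_hz
    z_inner = parallel(Rct, z_cpe(Qdl, ndl, w))           # charge transfer
    z_film = parallel(Rox + z_inner, z_cpe(Qox, nox, w))  # passive film
    z_dep = parallel(Rc + z_film, z_cpe(Qc, nc, w))       # calcareous deposits
    return Rs + z_dep

# Illustrative parameters only, orders of magnitude inspired by the text
f = np.logspace(5, -2, 49)
Z = z_total(f, Rs=5.0, Rc=7e3, Qc=1e-5, nc=0.9,
            Rox=7e2, Qox=5e-5, nox=0.9, Rct=1.3e5, Qdl=1e-4, ndl=0.85)
print(Z.real[:3], -Z.imag[:3])  # Nyquist coordinates at high frequency
```

Fitting a measured spectrum then amounts to adjusting these parameters to minimize the misfit between the model and the data, which is essentially what packages such as ZSimpWin do.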
The EIS fitting results for the 304 SS electrode under polarization at −0.80 V in sea water are listed in Table 1. Rct reflects the corrosion resistance of 304 SS in sea water. The corrosion resistance of stainless steel depends on the protective layer of thin oxide film formed on the surface, which has limited ionic and electronic conductivity and can significantly lower the electrochemical reaction rates on the surface [2,30]. The Rct decreased from 199 kΩ·cm² without cathodic polarization to 14.48 kΩ·cm² after polarization for 4 h, then gradually increased as the polarization went on, reaching 130.6 kΩ·cm² after 168 h of polarization. The fast decrease of Rct at the first stage may be related to the degradation of the oxide film on the surface of 304 SS with cathodic polarization at −0.80 V [41]. The following gradual increase of Rct may be attributed to the repair of the passive film because of the alkaline environment formed due to cathodic polarization [42,43].

Rc is associated with the calcareous deposits. After 20 h of cathodic polarization at −0.80 V, Rc began to appear and increased from 2459 Ω·cm² to 7290 Ω·cm² at the end of the whole polarization period, which can be ascribed to the formation and growth of the calcareous deposits.

The passive film resistance Rox decreased very fast at the first stage of cathodic polarization, from 463.9 Ω·cm² without polarization down to 15.28 Ω·cm² for the sample polarized for 10 h, and then increased gradually to 489.6 Ω·cm², which is close to the oxide film resistance of the unpolarized sample. With further cathodic polarization, Rox continued to increase and reached 725.6 Ω·cm² at the end of the polarization test.

EIS Evolution of 304 SS with Polarization at −0.65 V

Figure 5 shows the EIS evolution of 304 SS under polarization at −0.65 V in sea water. It can be seen that the EIS evolution can be divided into two stages during the 168-h test period, similar to that for polarization at −0.80 V. After 50 h of polarization at −0.65 V, a new dispersed capacitance loop was present in the high frequency region of the Nyquist plot, which was caused by the calcareous deposits. As compared with the EIS spectra at −0.80 V, the corresponding phase angle peaks were positioned closer to the middle frequencies, suggesting that the deposits produced with polarization at −0.80 V should be more protective.
The equivalent electrical circuits in Figure 4a,c were used to simulate the EIS spectra for the different polarization periods. The fitted values for the elements in the equivalent circuit of 304 SS under polarization at −0.65 V are listed in Table 2. The charge transfer resistance Rct decreased at the first stage and then increased with further cathodic polarization, showing a similar trend to that under polarization at −0.80 V. This variation can also be explained by the reduction and repair of the passive film during the process of cathodic polarization. At the later stage of polarization, the Rct values at −0.65 V are larger than those at −0.80 V, suggesting that the stainless steel polarized at −0.65 V is more corrosion resistant in sea water. The EIS analysis demonstrated that the calcareous deposits began to form noticeably after 50 h of polarization at −0.65 V. The resistance of the deposits Rc increased with polarization, from 911 Ω·cm² at 50 h to 2861 Ω·cm² at 168 h, owing to the growth of the calcareous deposits. However, the Rc values at −0.65 V are smaller than those for polarization at −0.80 V, which implies that the deposits formed at −0.65 V are not as protective.

The film resistance Rox presented a change similar to that of Rct, also decreasing at the first stage and then increasing. This change can be related to the reduction and the repair of the oxide film under polarization at −0.65 V. It is interesting that the sample polarized at −0.65 V has a smaller Rox than the one at −0.80 V, probably because the passive film modified under polarization at −0.65 V is more conductive [13]. The oxide film with lower resistance and the more porous calcareous deposits make the current density of 304 SS required to maintain polarization at −0.65 V even higher than that at −0.80 V (see Figure 2).

EIS Evolution of 304 SS with Polarization at −0.50 V

Figure 6 shows the EIS evolution of 304 SS with polarization at −0.50 V in sea water. It can be seen that the EIS spectra have quite similar characteristics throughout the test period. At this polarization potential, the capacitance arc in the Nyquist plot attributed to the calcareous deposits cannot be found.

The equivalent circuit in Figure 4a is suitable for representing the electrode process, which is dominated by the corrosion reaction of the metal and the change of the passive film. The fitting results of the EIS for 304 SS with polarization at −0.50 V are listed in Table 3. As the polarization went on, both Rox and Rct increased continuously. It can be inferred that the passive film is not reduced under polarization at −0.50 V [9,15]. With polarization at −0.50 V, the dominant cathodic reaction on the surface of 304 SS is the oxygen reduction. The cathodic polarization can inhibit the adsorption of chloride ions on the surface, so the passive film will not be attacked. The cathodic polarization at −0.50 V will enhance the pH value of the electrolyte adjacent to the surface, which may promote the growth of the oxide film, but is not high enough to lead to the formation of calcareous deposits. The large Rox and Rct values demonstrate that type 304 stainless steel with cathodic polarization at −0.50 V can have good corrosion resistance in sea water. Note that there are some differences among the ground samples without polarization for the EIS testing immersed in sea water, which indicates that it is difficult to obtain the same surface condition of the passive film on 304 SS with only abrasive grinding.
SEM Analysis

Figure 7 shows the SEM morphologies of 304 SS after polarization at −0.80 V for 20 h, 70 h, and 168 h. Some scattered crystal particles of calcareous deposit were found on the sample surface after 20 h of polarization at −0.80 V. These particles were coarse and corn-shaped. Only part of the surface was covered by the deposits. The calcareous deposits became complete and covered the whole surface of the sample at a polarization time of 70 h, while the deposits formed with polarization for 168 h were more compact and thicker. It seems that the later deposits grew on the bases of the early formed particles, blocking the gaps between the corn-like particles.
After 20 h of polarization at −0.65 V, a small amount of calcareous deposit could be found on only part of the sample surface, as shown in Figure 8. The deposits were composed of fine particles and covered only quite a small area. Therefore, the EIS spectrum at 20 h of polarization did not present the time constant at high frequency related to the calcareous deposits (see Figure 5). With polarization at −0.65 V for 70 h, the calcareous deposits covered the whole surface, where many corn-like coarse particles were stacked together. After 168 h of polarization at −0.65 V, the deposits became thicker, and flower-like top deposits with many small crystals had grown on the base scale.

In comparison with the morphology of the calcareous deposits under polarization at −0.80 V, the deposits formed at −0.65 V have a microstructure with more gaps and holes. These porous deposits will decrease the resistance and be less protective, which is consistent with the results of the EIS analysis and the chronoamperometry measurements.

Figure 9 shows the morphology of 304 SS with polarization at −0.50 V for different times. The surface of 304 SS was clean without any corrosion, and no calcareous deposits were found throughout the polarization period at −0.50 V.

The variation of the surface SEM morphology of 304 SS immersed in sea water without polarization was also observed. It was found that pitting corrosion occurred on some 304 SS samples after immersion in sea water for 168 h, because the passive film is easily attacked by chloride ions. The results illustrate that cathodic polarization at −0.50 V can provide effective cathodic protection for 304 SS exposed in sea water.

EDX Analysis

EDX spectra of the 304 SS surface after polarization at −0.80 V, −0.65 V, and −0.50 V for different times were recorded at the marked positions from A to K shown in Figures 7-9. The main elements at the marked locations on the surface of polarized 304 SS are shown in Figure 10.

For the position marked A, the elements C, O, Ca, Mg, and Fe were detected. Note that it has a high Ca peak with a trace Mg signal, which means that the deposits on 304 SS with polarization at −0.80 V in sea water for 20 h consist predominantly of Ca-containing compounds and few Mg-containing compounds. The weak Fe signal may come from the metal substrate. Although the presence of a co-deposit of Mg(OH)2 together with iron hydroxide as a thin base layer well before the precipitation of CaCO3 on steel has been reported in the literature [5], the calcareous deposits formed at −0.65 V without any Mg element indicate that no such co-deposit containing Mg and Fe precipitated as a base layer on 304 SS. Hence, the Fe element is not from the deposit itself. For position B, without precipitated particles, only obvious signals of Fe, Cr, and Ni were detected, which come from the 304 SS surface. Positions C and D, corresponding to polarization at −0.80 V for 70 h and 168 h respectively, presented somewhat higher peaks of Ca and Mg as compared with position A, which can be ascribed to the growth of the calcareous deposits. It should be noted from Figure 10 that only the deposits formed by polarization at −0.80 V contained the Mg element. Usually, magnesium hydroxide precipitates in sea water at a critical pH value as high as 9.3 or more [5,44]. This implies that applied polarization at −0.80 V can result in a local environment of high pH at the interface of the metal and the electrolyte, which promotes the deposition of Mg(OH)2.

With polarization at the potential of −0.65 V, the calcareous deposits contain only a Ca-containing compound (CaCO3, as determined by XRD) without the Mg element (see the spectra at locations E, F, G, and H in Figure 10). The spectrum at position F is almost the same as that at position B. The somewhat high content of Fe and Cr at position E, which obviously comes from the metal substrate, demonstrates that the deposit is thin at 20 h of cathodic polarization. With polarization for 168 h, the deposits of CaCO3 became thick, as no metal element from the 304 SS substrate could be detected (see the spectrum at location H). For the samples with polarization at −0.50 V, only the elements Fe, C, Cr, and Ni were found on the surface, implying that calcareous deposits cannot be formed at −0.50 V over the whole test period. This result is consistent with that of Eashwar [31], that calcareous deposits can be precipitated on stainless steel if the potential is more negative than −0.67 V.
XRD Analysis

Figure 11 shows the XRD patterns of 304 SS electrodes after polarization at −0.80 V, −0.65 V, and −0.50 V for 168 h. The constituent of the calcareous deposits formed at −0.65 V and −0.80 V was found to be aragonite (CaCO3). No Mg(OH)2 phase was present in the sample polarized at −0.80 V, even though a little Mg was detected by EDX analysis. There was no evidence of Ca- or Mg-related substances on the polarized 304 SS surface after 168 h at −0.50 V, where only the phases of 304 SS itself were detected.

Discussion

The mechanism of calcareous deposit formation under cathodic protection has been well documented [18,24-26,29]. Oxygen reduction is the dominant reaction on the metal surface at potentials from −0.50 V to −0.80 V, without hydrogen evolution. The oxygen reduction reaction increases the OH− concentration in the electrolyte adjacent to the metal surface. As the pH in the electrolyte exceeds a critical value, the calcareous deposit precipitates on the surface through the formation and growth of crystal nuclei. SEM observation of the 304 SS surface under cathodic polarization shows that the calcareous deposit does not precipitate at −0.50 V, indicating that the small polarization current cannot establish a sufficiently high pH adjacent to the surface. In contrast, the high current density during the first stage of polarization at −0.80 V easily raises the near-surface pH to the critical value, promoting the formation of the calcareous deposits.

Cathodic polarization affects not only the calcareous deposit, but also the oxide film on the stainless steel. Generally, a passive film forms spontaneously on the surface of stainless steel owing to its high content of alloying elements such as Cr and Ni. After finishing the surface with abrasive paper, a thin oxide film still remains on the surface of the stainless steel sample, as revealed by the Rox of the fresh sample before cathodic polarization. The passive film on stainless steel generally has a bilayer structure, with an outermost oxide/hydroxide layer enriched in iron and an inner layer predominantly rich in chromium and nickel oxides [43,45-48].
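For concreteness, the deposition mechanism invoked in the first paragraph of the Discussion can be summarized by the standard reaction chain for calcareous deposition; these are textbook reactions added here for reference, not reproduced from this paper's own equations.

```latex
\begin{align*}
\mathrm{O_2 + 2\,H_2O + 4\,e^-} &\rightarrow \mathrm{4\,OH^-} && \text{(cathodic oxygen reduction)}\\
\mathrm{HCO_3^- + OH^-} &\rightarrow \mathrm{CO_3^{2-} + H_2O} && \text{(carbonate shift at elevated pH)}\\
\mathrm{Ca^{2+} + CO_3^{2-}} &\rightarrow \mathrm{CaCO_3\!\downarrow} && \text{(aragonite precipitation)}\\
\mathrm{Mg^{2+} + 2\,OH^-} &\rightarrow \mathrm{Mg(OH)_2\!\downarrow} && \text{(only above pH} \approx 9.3\text{)}
\end{align*}
```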
Cathodic polarization damages this protective layer through reduction reactions of the oxide film, especially of the ferric oxide [16,33]. However, the passive film cannot be removed completely from the 304 SS surface even under strong cathodic polarization at a potential of −1.5 V [30]. The resistance of the passive film is proportional to the thickness of the film and to the specific resistivity of the passive layer [30]. The reduction of the oxides can weaken the stability of the film and reduce its thickness by dissolution. It can also decrease the resistivity of the film, as the composition and structure of the oxides are changed [13]. Therefore, it is reasonable to attribute the decrease of Rox during the first stage of polarization at −0.80 V to the reduction of the passive film [34-37].

When 304 SS is polarized at −0.80 V in sea water, the oxygen reduction reaction on the passivated surface occurs simultaneously with the reduction of the ferric oxide in the passive film [33]. The circulation of cathodic current induces a pH shift towards alkaline values, which promotes the growth of calcareous deposits [5,15,22] and enhances the repair of the reduced oxide film on the surface [41-43]. It is known that stainless steel has excellent corrosion resistance in alkaline solutions with high pH [43,49-51], which is related to a thin but very protective oxide film formed on the surface. Therefore, Rox increases with the continuing cathodic polarization at the later stage.
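A common way to express the proportionality invoked above, consistent with the Ω·cm² units used for Rox in the EIS fitting tables, is the relation below; the symbols ρf and df are introduced here only for illustration and are not defined in the original text.

```latex
R_{\mathrm{ox}} \;\approx\; \rho_f \, d_f ,
```

with ρf in Ω·cm being the specific resistivity of the passive layer and df in cm its thickness, so that Rox carries units of Ω·cm². Cathodic reduction thins the film (smaller df) and lowers its resistivity (smaller ρf), while repassivation at elevated pH reverses both trends.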
It can be seen from the EIS analysis that the surface evolution of 304 SS is consistent with the change of current density observed in the chronoamperometry experiments at the different polarization potentials. With polarization at −0.50 V, 304 SS can be protected at a low current density thanks to the growth of a passive film with high resistance, even though calcareous deposits cannot form. When the sample is polarized at −0.80 V, the protective current density is lower than that at −0.65 V but higher than that at −0.50 V, owing to the coverage of relatively compact calcareous deposits on the surface. With polarization at −0.80 V and −0.65 V, the surface of 304 SS changes: the oxide film is first reduced and then repaired as the adjacent pH value increases. Once the cathodic protection is interrupted, the calcareous deposit can still protect the covered substrate by an isolation effect, while the exposed surface without a deposit, or under a porous deposit, will suffer corrosion attack by chloride ions in sea water. The repassivated surface has better corrosion resistance than the surface whose oxide film has been reduced.

According to ISO 15589-2 [52], the recommended potential criteria of cathodic protection for an austenitic stainless steel like 304 SS are −0.50 V as the minimum negative potential and −1.10 V as the maximum negative potential. For stainless steels of high strength, like precipitation-hardened martensitic steel, potentials more negative than −0.80 V shall be avoided to prevent hydrogen embrittlement. To obtain proper protection of a stainless-steel propeller shaft-stern tube assembly, Lorenzi et al. used galvanic anodes of carbon steel in order to achieve the required criterion of cathodic protection [1]. Sarlak et al. investigated the calcareous deposit formed on 316 stainless steel at different cathodic potentials [26], finding that the deposit formed at −0.95 V was dominated by a Ca-rich phase with a low amount of brucite. At more cathodic values, a less protective deposit with a greater amount of brucite usually forms. Eashwar et al. studied the cathodic behavior of stainless steel in coastal Indian seawater and concluded that a potential of about −0.70 V should provide optimum protection of 316 SS in warmer, full-strength seawater that supports the precipitation of calcareous deposits [31]. The suitable potential of cathodic protection for stainless steel thus depends on the specific situation. The optimal cathodic protection potential emerging from the results presented in this study is −0.50 V for 304 SS in sea water, especially for moving parts, considering the small current density, the high resistance of the passive film, and the absence of a calcareous deposit. It should be noted that this result was derived from laboratory testing; the situation in engineering applications is more complicated, because the optimization of the potential is influenced by many other factors, such as the flow rate, temperature and chemistry of the sea water, and biofouling.

Conclusions

Type 304 SS was polarized in sea water at −0.80 V vs. SCE, −0.65 V vs. SCE, and −0.50 V vs. SCE. Chronoamperometry, EIS, SEM, EDX, and XRD were used to investigate the evolution of the calcareous deposits and the passive film on the surface. The conclusions can be drawn as below.

• Type 304 SS can be protected effectively from corrosion by cathodic polarization at all the tested potentials. The current density needed for keeping the polarization at −0.80 V vs. SCE was smaller than that for maintaining the polarization at −0.65 V vs.
SCE, in relation to the formation of more compact calcareous deposits and the higher resistance of the passive film. This investigation suggests that, among the tested potentials, the optimal one for cathodic protection of 304 SS in sea water is −0.50 V vs. SCE, especially for moving parts, as a compromise among the effects on the passive film, the calcareous deposits, and the protective current density.

• The analyses by EIS, SEM, EDX and XRD demonstrated that calcareous deposits were formed on 304 SS at −0.80 V vs. SCE and −0.65 V vs. SCE, but not at −0.50 V vs. SCE. A longer polarization was needed to produce calcareous deposits at −0.65 V vs. SCE than at −0.80 V vs. SCE. The deposits formed at −0.80 V vs. SCE consisted predominantly of CaCO3 and a small amount of Mg-containing substances, while the precipitates produced at −0.65 V vs. SCE contained only CaCO3. The CaCO3 phase was aragonite.

• With polarization at −0.80 V vs. SCE and −0.65 V vs. SCE, the resistance of the passive film on 304 SS decreased initially and then increased, in relation to the reduction of the oxide film and its subsequent repair. For the stainless steel polarized at −0.50 V vs. SCE, the film resistance increased with polarization time, indicating that the oxide film is not reduced at this potential. The dominant cathodic reaction at −0.50 V vs. SCE is the oxygen reduction, which elevates the pH value adjacent to the surface, promoting the growth of the oxide film.

Figure 2. Variation of current density with potentiostatic polarization of 304 SS at different cathodic potentials in seawater.

Figure 3. EIS evolution of 304 SS with polarization at −0.80 V in sea water. Nyquist plot (a), Bode plot (b,c), in which symbols are measured data and lines are simulated data.

Figure 4. Equivalent electrical circuits (EECs) used for simulating the EIS spectra of 304 SS in seawater after cathodic polarization at different potentials. EEC used at the first stage of polarization (a), at the second stage with imperfect calcareous deposits (b), and at the later stage with complete calcareous deposits (c).

Figure 5. EIS evolution of 304 SS under polarization at −0.65 V in sea water. Nyquist plot (a), Bode plot (b,c), in which symbols are measured data and lines are simulated data.
Figure 6. EIS evolution of 304 SS under the polarization potential of −0.50 V in sea water. Nyquist plot (a), Bode plot (b,c), in which symbols are measured data and lines are simulated data.

Figure 7. Surface morphology of 304 SS with polarization at −0.80 V in sea water for 20 h (a), 70 h (b) and 168 h (c).

Figure 11. XRD patterns of 304 SS after cathodic polarization for 168 h in sea water at different potentials.

Table 1. Fitting parameters for EIS spectra of 304 SS under polarization at −0.80 V.

Table 2. Fitting parameters for EIS spectra of 304 SS under polarization at −0.65 V.

Table 3. Fitting parameters for EIS spectra of 304 SS under polarization at −0.50 V. Columns: Rs (Ω·cm²), Qdl (µF·cm⁻²), n1, Rct (kΩ·cm²), Qox (µF·cm⁻²), n3, Rox (Ω·cm²).
2019-04-09T13:10:50.088Z
2018-05-21T00:00:00.000
{ "year": 2018, "sha1": "73b43e8d27955c0914a9005bea26229cf4ddccec", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-6412/8/5/194/pdf?version=1527149527", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "73b43e8d27955c0914a9005bea26229cf4ddccec", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
234781182
pes2o/s2orc
v3-fos-license
Klebsiella MALDI TypeR: a web-based tool for Klebsiella identification based on MALDI-TOF mass spectrometry

Motivation

Klebsiella species are increasingly multidrug-resistant pathogens affecting human and animal health and are widely distributed in the environment. Among these, the Klebsiella pneumoniae species complex (KpSC), which includes seven phylogroups, is an important cause of community and hospital infections. In addition, the Klebsiella oxytoca species complex (KoSC) also causes hospital infections and antibiotic-associated haemorrhagic colitis. The unsuitability of widely used clinical microbiology methods to distinguish species within each of these species complexes leads to high rates of misidentification, which masks the true clinical significance and potential epidemiological specificities of the individual species.

Results

We developed a web-based tool, Klebsiella MALDI TypeR, a platform-independent and user-friendly application that enables uploading raw data from a MALDI-TOF mass spectrometer to identify Klebsiella isolates at the species complex and phylogroup levels. The tool is based on a database of previously identified biomarkers that are specific for either the species complex, individual phylogroups, or related phylogroups, and is available at https://maldityper.pasteur.fr.

Introduction

Recent works have pointed out the difficulties that microbiology laboratories encounter in the identification of individual members of the K. pneumoniae and K. oxytoca species complexes (Brisse et al., 2004; Seki et al., 2013; Fonseca et al., 2017; Long et al., 2017). Indeed, each species complex encompasses several closely related phylogroups, which correspond to species or subspecies with few discriminatory markers (Rodrigues et al., 2018; Merla et al., 2019). As a consequence, the true clinical significance of these phylogroups and their potential epidemiological specificities are poorly defined. The current Bruker Biotyper Matrix-Assisted Laser Desorption Ionization-Time of Flight (MALDI-TOF) mass spectrometry (MS) database comprises reference spectra only for K. pneumoniae, K. variicola and K. oxytoca (Berry et al., 2015; Long et al., 2017; Dinkelacker et al., 2018). However, recent work demonstrated the ability of MALDI-TOF MS to generate discriminant peaks at the phylogroup level (Rodrigues et al., 2018; Merla et al., 2019; Dinkelacker et al., 2018). Here, we designed Klebsiella MALDI TypeR in order to leverage these biomarkers for improved accuracy and resolution of Klebsiella species identification from MALDI-TOF data.

Features

Klebsiella MALDI TypeR is an application with a web interface designed with R and R Shiny. A user can upload either a single spectrum or multiple spectra (a maximum number of spectra is not defined, but we recommend working with batches of 100 spectra per analysis). The uploaded spectra must be in zip archive format. Each archive may correspond either to a single raw spectrum or to several raw spectra of the same isolate (technical replicates).
For better results, we recommend using at least 3 technical replicates for each isolate. The shape of the spectra can be checked after upload: a panel shows each uploaded spectrum for quick visual inspection (intensity checking), and a table highlights the presence (if any) of empty spectra.

For identification, the tool uses a 2-step process. The first step differentiates isolates at the species complex level using two sets of biomarkers, each set corresponding to reference biomarkers for (i) the Klebsiella pneumoniae species complex (29 biomarkers, Table S1) or (ii) the Klebsiella oxytoca species complex (57 biomarkers, Table S1). This step can be performed, based on the user's choice, with one of two alternative algorithms for peak detection. The first one [using the R package MALDIquant (Gibb and Strimmer, 2012)] is set by default and runs faster. The second one [using the MassSpecWavelet package (Du et al., 2006)] generally detects more significant peaks but is slower. Species complex identification is based on the ratio of the number of peaks common to the sample peak list and a reference peak list to the number of peaks in the corresponding reference (this ratio is called the "Identification Value"). The input sample is assigned to the species complex with the highest Identification Value.

The second step is to identify the specific phylogroup (corresponding to a species or subspecies) within that complex. The raw spectrum is processed again by the method known as Multi Peak Shift Typing (MPST) (Bridel et al., 2020), but this time using the set of biomarkers corresponding to the assigned species complex, in order to recalibrate the m/z values. The biomarkers used for the two MPST schemes (one for each species complex, see Tables S2 to S5) are based on previous work (Rodrigues et al., 2018; Merla et al., 2019). Briefly, an MPST scheme is analogous to a multilocus sequence typing (MLST) scheme, where an allele type (AT) corresponds to an isoform (IF) of one protein (each IF is defined by a specific mass, i.e. an m/z value on the spectrum) and a MALDI-Type (MT) corresponds to a sequence type (ST). Each MT was observed within a single phylogroup. The tool attempts to define the IF for each biomarker and to attribute the spectrum to the phylogroup corresponding to the relevant MT.

The only exception to this process was introduced for the discrimination of the K. oxytoca phylogroups Ko1, Ko4 and Ko6 (K. oxytoca, K. pasteurii and K. grimontii, respectively). These three species form a phylogenetic branch of closely related species within the K. oxytoca species complex (Merla et al., 2019), and their spectra are so similar that they share the same MT. In order to distinguish among these three phylogroups, we added 3 previously described biomarkers (Merla et al., 2019) whose presence/absence the tool checks within the spectra (i.e. these 3 biomarkers do not correspond to peak shifts). If any of the above biomarkers (peak shifts or presence/absence) are not found, automatic identification is not possible, and the result is reported to the user as NT (not typable). In this case, manual analysis or data re-acquisition is needed. To help resolve such cases and allow users to further explore the data, we included a feature to interactively explore the spectra. The plot window shows the sample spectra overlaid on the reference spectra (1 to 3 spectra for each phylogroup). Predefined windows (one for each biomarker) allow the user to directly investigate specific spectral regions.
This feature could also enable the detection of new isoforms or biomarkers. The tool also contains a customizable option that allows the user to select any region of the spectra. Examples of these features are depicted in Figures S1 to S3. The website is available at http://hub05.hosting.pasteur.fr/sbridel/klebsiellatyper/ (or https://maldityper.pasteur.fr).

Spectra prepared using different experimental conditions (two media, LB and Columbia agar plus 5% sheep blood, each with both extraction and direct application on the MALDI-TOF target) also yielded correct complex assignments for all isolates, albeit with lower confidence (Table S1). Regarding phylogroup assignment, with LB-extraction 90% of the samples were correctly identified at the phylogroup level (Table S7), while the other sample preparation methods yielded better results overall (Table S8). To validate the robustness of the tool to instrument and user variation, we tested an additional dataset comprising 248 KpSC isolates prepared using the LB-extraction method (176 isolates with a single spectrum, 26 in duplicate and 68 in triplicate); these spectra were generated on a different instrument (Bruker MALDI Biotyper microflex LT-SH) in a different laboratory (Melbourne, Australia). 244/248 isolates (98%) were assigned to the correct species complex, and 219/244 (90%) were correctly identified at the phylogroup level (84% to 100% for each phylogroup, see Table S9). K. oxytoca species complex (KoSC) assignments were tested using 28 isolates prepared using LB-extraction and triplicate spectra generated on the Bruker Microflex LT mass spectrometer (Paris). All isolates were correctly assigned at the complex level and most (89%) were assigned to the correct MALDI-Type (Tables S10 and S11). Discrimination of MALDI-Type MT1 (n=15, 54%) into phylogroups Ko1, Ko4 and Ko6 was poor (only one isolate was correctly assigned as Ko1), because the three additional biomarkers intended to discriminate between these phylogroups were captured only rarely (Table S6).

Conclusions

We developed the Klebsiella MALDI TypeR web tool for identification of Klebsiella species based on MALDI-TOF MS. Our tool allows the fast analysis of single or multiple spectra, including their visualization. We show that MALDI-TOF MS leads to accurate identification of Klebsiella phylogroups of the KpSC, the major Klebsiella species complex found in clinical samples. Klebsiella MALDI TypeR therefore represents an important resource for the identification of KpSC members, which are currently difficult to identify in routine microbiology practice. For the KoSC, we found promising data, but further validation based on a larger dataset would be required; the biomarker approach may not be suitable for accurate discrimination of Ko1, Ko4 and Ko6. In the future, the website may be updated to integrate additional methods for identification and typing, such as supervised learning.
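To make the species-complex assignment step concrete, the sketch below (in Python, although the tool itself is written in R) computes the "Identification Value" described in the Features section: the fraction of reference biomarker peaks recovered in a sample peak list. The matching tolerance and the biomarker masses are illustrative assumptions, not the tool's actual values.

```python
import numpy as np

def identification_value(sample_peaks, reference_peaks, rel_tol=500e-6):
    """Fraction of reference biomarker peaks matched by a sample peak list.

    A reference peak counts as 'common' when the sample contains a peak
    within a relative m/z tolerance (rel_tol; 500 ppm is an assumed value).
    """
    sample = np.sort(np.asarray(sample_peaks, dtype=float))
    matched = 0
    for mz in reference_peaks:
        idx = np.searchsorted(sample, mz)              # nearest-neighbour lookup
        neighbours = sample[max(idx - 1, 0):idx + 1]
        if neighbours.size and np.min(np.abs(neighbours - mz)) <= rel_tol * mz:
            matched += 1
    return matched / len(reference_peaks)

# Assign the sample to the species complex with the highest Identification Value.
references = {
    "KpSC": [4364.0, 5380.0, 9540.0],  # placeholder masses, not the real biomarkers
    "KoSC": [4276.0, 5096.0, 9665.0],
}
sample = [4364.4, 5379.6, 7274.5]
best = max(references, key=lambda c: identification_value(sample, references[c]))
print(best, {c: identification_value(sample, references[c]) for c in references})
```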
2020-10-17T13:12:45.223Z
2020-10-13T00:00:00.000
{ "year": 2020, "sha1": "71dccff25df5cc1cc985d4cca78800354d6ccb66", "oa_license": "CCBYNC", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2020/10/13/2020.10.13.337162.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "71dccff25df5cc1cc985d4cca78800354d6ccb66", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
30955241
pes2o/s2orc
v3-fos-license
Dynamic Analytical Initialization Method for Spacecraft Attitude Estimators

This paper proposes a dynamic analytical initialization method for spacecraft attitude estimators. In the proposed method, the desired attitude matrix is decomposed into two parts: one is the constant attitude matrix at the very start and the other encodes the attitude changes of the body frame from its initial state. The latter one can be calculated recursively using the gyroscope outputs, and the constant attitude matrix can be determined using constructed vector observations at different times. Compared with traditional initialization methods, the proposed method does not necessitate the spacecraft being static or more than two non-collinear vector observations at the same time. Therefore, the proposed method can promote increased spacecraft autonomy by autonomous initialization of attitude estimators. The effectiveness and prospect of the proposed method in spacecraft attitude estimation applications have been validated through numerical simulations.
I. INTRODUCTION

Attitude determination algorithms are memoryless, while attitude estimation algorithms can retain information from a series of measurements taken over time. Generally, attitude determination algorithms are based on solutions to Wahba's problem, and attitude estimation algorithms are filtering-based approaches. Compared with attitude determination, attitude estimation can make full use of the measurement information and can readily determine a dynamic attitude. Moreover, attitude estimation can determine parameters other than the attitude, say the gyro bias, resulting in more accurate performance. In this respect, attitude estimation is preferred in modern spacecraft missions.

The dynamic model of spacecraft attitude estimation is a nonlinear model, especially the vector-observation-based measurement model, which necessitates nonlinear filtering algorithms. Among these filtering algorithms, the quaternion-based extended Kalman filter, referred to as the multiplicative extended Kalman filter (MEKF), is the most celebrated choice for the great majority of applications [3-7]. However, the MEKF may face difficulty when the dynamical models are highly nonlinear or/and good a priori state information cannot be obtained, which has promoted the development of advanced nonlinear attitude estimators [8-14]. The performance improvement of these advanced nonlinear attitude estimators comes at the cost of increased complexity and computational burden. Meanwhile, these estimators may be very difficult to tune and their stability has not been proven. Taking the unscented quaternion estimator (USQUE) for example [8], the initial attitude covariance setting has a great impact on the filtering performance, and it is related to the initial attitude estimate error, which we cannot know. A conservative approach is to set the attitude covariance to be very large, which may result in a very slow convergence speed and a large steady-state error.

For the special problem of spacecraft attitude estimation using vector observations, the corresponding nonlinearities of the model are well determined. In this respect, the superiority of the advanced nonlinear attitude estimators over the MEKF lies only in their capacity to handle a large initial estimate error. If the initial attitude information can be obtained to a certain precision, the MEKF is still the preferred choice for real-time spacecraft attitude estimation. These facts represent the main motivation of this paper, which is devoted to proposing a novel initialization method for the MEKF. The advanced nonlinear attitude estimators are not preferred for initializing the MEKF, since their covariance initialization is itself a cumbersome problem. Traditional attitude determination methods cannot handle dynamic attitude estimation owing to their memoryless character. In the proposed initialization method, the spacecraft attitude is decomposed into two parts by introducing a new inertially fixed frame. The first part encodes the attitude changes of the body frame from its initial state, which can be derived through attitude calculation using the gyro measurements with a naturally known initial value. The other part encodes the constant attitude between the body frame and the inertial frame at the very start of the mission. This constant attitude can be determined from the constructed new vector observations.
Through such attitude decomposition, the heart of the attitude estimation problem is transformed into determining a constant attitude using vector observations taken at different times. That is to say, attitude determination methods can be used to determine the dynamic attitude using a series of measurements taken over time. The proposed procedure thus endows attitude determination methods with memory. The resulting attitude determination within a short time period is accurate enough to guarantee the validity of the linearization in the MEKF. Therefore, autonomous initialization of attitude estimators can be expected from the proposed methodology. The attitude decomposition method is inspired by our previously studied initial alignment methods for the strapdown inertial navigation system [15-19]. Hopefully, the investigated method can provide a new line of thought for the initialization problem of spacecraft attitude estimators.

II. SPACECRAFT ATTITUDE ESTIMATION MODEL

The discrete-time attitude kinematics model is given by [2]

q_k^i = \bar{\Omega}(\omega_{k-1})\, q_{k-1}^i, \qquad
\bar{\Omega}(\omega) = \begin{bmatrix} \cos(\tfrac{1}{2}\|\omega\|\Delta t)\, I_{3\times 3} - [\hat{\psi}\times] & \hat{\psi} \\ -\hat{\psi}^T & \cos(\tfrac{1}{2}\|\omega\|\Delta t) \end{bmatrix}, \qquad
\hat{\psi} = \frac{\sin(\tfrac{1}{2}\|\omega\|\Delta t)}{\|\omega\|}\,\omega,   (1)

where q_k^i denotes the attitude quaternion from the inertial frame i to the body frame b at time t_k, \Delta t is the sampling interval of the gyro, and I_{3\times 3} is the identity matrix, with the subscript denoting its dimension. \omega_{k-1} is the angular rate of the spacecraft and can be derived from the gyro measurement as

\omega_{k-1} = \tilde{\omega}_{k-1} - \beta_{k-1} - \eta_{v,k-1},   (2)

where \tilde{\omega}_{k-1} is the measurement of the gyro and \eta_{v,k-1} is a Gaussian white-noise process. \beta_{k-1} is the gyro bias and is assumed to be constant, that is,

\beta_k = \beta_{k-1}.   (3)

The vector observation model for attitude estimation is given by

b_k = A(q_k^i)\, r_k + \nu_k,   (4)

where r_k is a reference vector expressed in the inertial frame, b_k is its measurement in the body frame, A(\cdot) is the attitude matrix corresponding to the attitude quaternion, and \nu_k is the measurement noise.

III. NOVEL INITIALIZATION METHOD

Generally, at the beginning of a mission we cannot obtain a precise initial attitude and gyro bias, so the attitude has to be calculated recursively using (1) with only a guess of the initial value. The corresponding calculation error can be estimated based on the vector observation model (4) and used to refine the calculated attitude. If the gyro bias is provisionally not considered, the attitude error during the recursive calculation is mainly caused by its initial error. With this consideration, we introduce a new inertial frame b_0 by fixing the body frame b at start-up in the inertial space. Then the attitude matrix can be decomposed into two parts as

A(q_k^i) = A(q_k^{b_0})\, A(q_i^{b_0}),   (5)

where q_i^{b_0} is the constant attitude quaternion from the inertial frame to the frame b_0, and q_k^{b_0} is the attitude quaternion from b_0 to the body frame at time t_k. It is clear that the initial value q_0^{b_0} is the identity quaternion [0 0 0 1]^T, so q_k^{b_0} can be calculated recursively by

q_k^{b_0} = \bar{\Omega}(\tilde{\omega}_{k-1})\, q_{k-1}^{b_0},   (6)

using the gyro measurements, without any initial error. After the aforementioned attitude matrix decomposition, the heart of the estimation becomes determining the constant attitude q_i^{b_0}. Multiplying (4) by A(q_k^{b_0})^T from the left yields the constructed vector observations

A(q_k^{b_0})^T b_k = A(q_i^{b_0})\, r_k + A(q_k^{b_0})^T \nu_k.   (9)

Given the gyro measurements, Eq. (9) is a typical attitude determination problem using vector observations, and its Wahba's problem formulation can be given by

\min_A J(A) = \sum_{k=1}^{M} a_k \left\| A(q_k^{b_0})^T b_k - A\, r_k \right\|^2,   (10)

where M is the number of vector observations used in the initialization process. Many existing algorithms can be used directly to address this problem [11], such as Davenport's q-method used in this paper. After the attitude q_i^{b_0} has been determined, the spacecraft attitude can be readily obtained through (5). Two accumulated matrices, b_k^+ and r_k^-, are determined according to (11). The explicit procedure of the proposed initialization method is summarized in Algorithm 1.

Algorithm 1:
Step 1: Set k = k + 1.
Step 2: Propagate q_k^{b_0} by the recursion (6).
Step 3: Construct the vector observations A(q_k^{b_0})^T b_k.
Step 4: Construct b_k^+ and r_k^- according to (11).
Step 5: Compute the matrix K_k from b_k^+ and r_k^-.
Step 6: Determine q_i^{b_0} by calculating the normalized eigenvector of K_k belonging to the smallest eigenvalue.
Step 7: Obtain the attitude matrix at the current time through (5).
Step 8: Go to Step 1 until the end of the initialization period.
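Since the constant-attitude step reduces to a Wahba problem, a minimal Python sketch of Davenport's q-method is given below. Note that it follows the common textbook convention in which the optimal quaternion is the eigenvector of the Davenport matrix with the largest eigenvalue; the matrix K_k of Step 6 above is evidently defined with the opposite sign, hence its smallest eigenvalue applies there. The weights are illustrative assumptions.

```python
import numpy as np

def davenport_q(b_list, r_list, weights=None):
    """Optimal quaternion q = [q1, q2, q3, q4] (scalar last) minimizing
    Wahba's cost J(A) = sum_k a_k * ||b_k - A(q) r_k||^2."""
    if weights is None:
        weights = np.ones(len(b_list))
    # Attitude profile matrix B and the Davenport matrix K
    B = sum(a * np.outer(b, r) for a, b, r in zip(weights, b_list, r_list))
    S, sigma = B + B.T, np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = K[3, :3] = z
    K[3, 3] = sigma
    w, V = np.linalg.eigh(K)     # symmetric eigenproblem, ascending eigenvalues
    return V[:, np.argmax(w)]    # unit eigenvector of the largest eigenvalue
```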
Therefore, if the proposed method is used all through the mission, the resulting attitude precision may not be satisfactory. Actually, the gyro bias is the main error source of the proposed method; it accumulates in the determined attitude and, therefore, a slowly climbing trend appears in the attitude determination result. This is another reason why the proposed method is used only within a short time period at the start of the mission.

Remark 3: If the spacecraft is static, or there are more than two non-collinear vector observations at the same time, attitude determination methods can also be used to initialize the attitude estimators. However, these requirements preclude autonomous initialization under arbitrary conditions. In contrast, the proposed method is effective even when the spacecraft is dynamic or when there is only one vector observation at a time. In this respect, the proposed method is more versatile for the initialization of attitude estimators.

IV. SIMULATION EXAMPLE

In this section, the performance of the proposed attitude estimation method is evaluated through several test cases using simulation example 7.2 of [1] (also example 6.2 of [2]). A 90-min simulation run is shown. The simulation here differs slightly from that in [1]: there, the star tracker can sense up to 10 stars, while in this simulation example only one star's measurement is used at each time instant. This makes the problem more difficult to address using conventional attitude determination methods. In the proposed attitude estimation methodology, denoted as "Optimal+MEKF", Davenport's q-method is applied for the first 5 minutes using our constructed observations, followed by the MEKF for the remaining time. The well-known attitude estimators, i.e. the MEKF and the USQUE, are also evaluated for comparison. Moreover, the method that makes use of only Davenport's q-method based on our constructed observations is also evaluated; it is denoted as "Optimal" here.

In the first case, the initial attitude estimate error is set as [10 10 30] deg. Actually, this attitude estimate error setting applies only to the MEKF and USQUE; for the "Optimal+MEKF" methodology, any prior information is meaningless. The gyro bias is set to 0.1 deg/h for each axis and the initial bias estimate is set to 0 for each axis. The initial covariance for the attitude error is set to (0.1 deg)² for the "Optimal+MEKF". As noted, this initial covariance is first applied after the 5 minutes of Davenport's q-method have been performed. The initial covariance for the attitude error is set to (10 deg)² for both the MEKF and USQUE. The initial covariance for the gyro bias is set to (…). The computational burden of the USQUE is almost 5 times that of the MEKF. For the "Optimal+MEKF", after the 5-minute attitude determination procedure, the attitude error is reduced to 0.08 deg, which is small enough to guarantee the validity of the linearization in the MEKF. The subsequent MEKF therefore achieves a very accurate estimation result, nearly the same as the USQUE but with much less computational burden. There is a slowly climbing trend in the attitude determination result of "Optimal", which is due to the gyro bias that is not taken into account in the algorithm. It is also shown that the attitude determination error can be reduced to within 1 degree almost instantaneously.
Such fast convergence speed and accurate performance make this method very suitable for initializing the attitude estimator, more specifically the MEKF.

In the second case, the initial attitude estimate error is set as […] deg; the corresponding results are shown in Fig. 2. In this case, the performance of the MEKF is much degraded compared with the first case, and the performance of the USQUE is also degraded a little. Here the "Optimal+MEKF" methodology outperforms the USQUE with a much smaller computational burden. It can be deduced that the performance of the USQUE would be further degraded if the initial attitude estimate error became larger; that is to say, the USQUE cannot handle an arbitrarily large initial attitude estimate error. In contrast, the proposed "Optimal+MEKF" can be carried out with no prior attitude estimate information at all, so the "Optimal+MEKF" will be more appealing in realistic applications.

It has been pointed out that the performance of the constructed attitude determination method relies heavily on the precision of the gyro bias. In the first two cases, the gyro bias is set to 0.1 deg/h for each axis, which can be viewed as a very accurate level; the resulting attitude determination performance is therefore very accurate, as shown in Figs. 1 and 2. In the third case, the gyro bias is set to 10 deg/h for each axis and the initial bias estimate is set to 0 for each axis. The initial covariance for the gyro bias is set to (…) for the "Optimal+MEKF", MEKF and USQUE. The initial attitude estimate error is set as [10 10 30] deg and the other settings are all the same as in the first case. The averaged norm of the total attitude estimation error over 50 Monte Carlo runs for this case is shown in Fig. 3. For the "Optimal+MEKF", after the 5-minute attitude determination procedure, the attitude error is reduced to 2.95 deg, which is much larger than in the first two cases because a much lower-grade gyro is used here. However, the attitude determination error of the proposed method is still small enough to guarantee the validity of the linearization in the MEKF, as the subsequent MEKF still achieves an accurate estimation performance. The performance of the MEKF and USQUE is similar to the corresponding one in the first case, which indicates that the gyro bias can be well estimated by the attitude estimators, no matter how large it is.

The corresponding results are shown in Fig. 4. It is shown that the performance of both the MEKF and USQUE is much degraded, and the superiority of "Optimal+MEKF" over the USQUE becomes more obvious.

Since the magnitude of the gyro bias has a significant effect on the performance of the proposed method, its performance with different magnitudes of the gyro bias has also been evaluated. The corresponding attitude determination error is shown in Fig. 5. It is clearly shown that, as the gyro bias becomes larger, the determined attitude degrades correspondingly. Specifically, when the gyro bias exceeds 10 deg/h, the proposed method is not recommended for initializing the MEKF; for such cases, however, it can be used to initialize the advanced nonlinear attitude estimators, such as the USQUE. From Fig. 5 it can also be seen that even when the gyro bias is very small, such as 0.01 deg/h, the determined attitude error is still larger than that of the MEKF with appropriate initialization.
The reasons have been discussed in Remark 2: the proposed method is virtually an analytical method, and the noise inherent in the vector observations cannot be well handled. In this respect, the proposed method is not recommended as the attitude determination method through the whole mission, even when the gyro bias is very small.

V. CONCLUSIONS

In this paper, a novel initialization method is proposed for spacecraft attitude estimators. In the proposed method, the attitude is decomposed into two parts: one encodes the attitude changes of the body frame, and the other encodes the constant attitude between the body frame and the inertial frame at the very start of the mission. The constant attitude can be determined using the constructed vector observations at different times, which endows the proposed attitude determination method with memory. Simulation results indicate that the proposed method can determine the attitude to a quite precise degree within only a short time period. With the initial value provided by the proposed method, the MEKF can estimate the attitude quite accurately, so complex nonlinear attitude estimators are no longer needed. Since the proposed method requires no prior information, autonomous initialization of attitude estimators can be expected.
2017-06-21T10:05:39.000Z
2017-06-21T00:00:00.000
{ "year": 2017, "sha1": "62948188961d5b6e94f028efd75c6ec4df4511d6", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "62948188961d5b6e94f028efd75c6ec4df4511d6", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
249890008
pes2o/s2orc
v3-fos-license
Resonance crossing of a charged body in a magnetized Kerr background: an analogue of extreme mass ratio inspiral

We investigate resonance crossings of a charged body moving around a Kerr black hole immersed in an external homogeneous magnetic field. This system can serve as an electromagnetic analogue of a weakly non-integrable extreme mass ratio inspiral (EMRI). In particular, the presence of the magnetic field renders the conservative part of the system non-integrable in the Liouville sense, while the electromagnetic self-force causes the charged body to inspiral. By studying the system without the self-force, we show the existence of an approximate Carter-like constant and discuss how resonances grow as a function of the perturbation parameter. Then, we apply the electromagnetic self-force to investigate crossings of these resonances during an inspiral. Averaging the energy and angular momentum losses during crossings allows us to employ an adiabatic approximation for them. We demonstrate that such an adiabatic approximation provides results qualitatively equivalent to the instantaneous self-force evolution, which indicates that the adiabatic approximation may describe resonance crossings with sufficient accuracy in EMRIs.

I. INTRODUCTION

Dynamical systems under a non-integrable perturbation exhibit various extraordinary features compared to the unperturbed ones [1]. First of all, the perturbation may trigger chaotic motion in some regions of the phase space. Depending on the strength of the perturbation, deterministic chaos may completely dominate the dynamics. However, even in slightly perturbed systems with a negligible amount of chaos, we observe dynamically relevant non-integrable effects near resonances, i.e. in parts of the phase space where the characteristic frequencies of the system are commensurate. In fact, according to the Kolmogorov-Arnold-Moser (KAM) theorem [2], parts of the phase space of a weakly perturbed system that are far enough from resonances remain basically unaffected by the imposed perturbation. On the other hand, the dynamics in the vicinity of resonances differs considerably, which may affect measurable properties of the system.

Motivated by the latter fact, there have been studies trying to provide some insight into the dynamics of an extreme mass ratio inspiral (EMRI) crossing a resonance [3-8]. EMRIs represent a key observational target for the future mission of the Laser Interferometer Space Antenna (LISA) [9]. They are composed of a supermassive primary black hole and a much lighter secondary compact object (black hole or neutron star) inspiralling into the primary. The mass ratio η = m/M between the mass m of the secondary and the mass M of the primary is smaller than 10⁻⁴. The smallness of η allows us to use it as a perturbation parameter when expanding the background spacetime of this binary system in orders of η to calculate the gravitational self-force (GSF) [10,11]. Actually, the dissipative part of this self-force causes the secondary to inspiral towards the primary as it radiates away energy and angular momentum in the form of gravitational waves. The full first-order self-force for a non-spinning secondary moving on a generic orbit has been obtained relatively recently [11,12], but using the full self-force is computationally expensive.
The main cause is that the inspiralling body revolves around the primary body for a number of cycles inversely proportional to the mass ratio of the system (η⁻¹). Since the mass ratio in an EMRI is small, the number of cycles becomes very large, making the numerical computations highly demanding. In order to decrease the computational demands, the gravitational self-force may be approximated in some way. In particular, we may employ an adiabatic approximation [13], which takes into account just the averaged dissipative part of the first-order self-force, while more advanced approximations are also available [14,15]. Actually, Ref. [14] attempts to tackle the issue that during the many EMRI cycles it is almost certain that the inspiralling body will cross resonances, i.e. some of the orbital characteristic frequencies of the secondary will become commensurate [3,16-19]. Not all of these crossings are equally important: only those with a small denominator, like 1:2 or 2:3, are expected to have a significant impact on the inspiral [20]. It has not yet been clarified up to which value of the denominator we should expect this impact, but we speculate that it should be a value of the order of 10. Most of the models we use to approximate an EMRI are problematic at the resonances [20]; however, as was already mentioned, there is an ongoing effort to overcome this obstacle [7,14], and this work is part of that effort.

Mutual effects of the electromagnetic and the gravitational self-force on the dynamics of a charged compact body have recently been studied in the context of EMRIs [21]. That study has shown that, besides the interaction terms previously derived in [22], additional perturbative terms linear in the metric perturbation are generated, and these terms may become relevant in some astrophysical situations. However, here we adopt a different approach, and we do not consider the combined effects of both interactions. In our scenario, the electromagnetic self-force exerted on the charged body in a magnetized spacetime serves as an analogue model of the gravitational self-force [23], which we, however, do not consider explicitly. Although the evolution of an EMRI system and the emission of gravitational radiation are actually governed by the gravitational self-force, here we deal with the aspects of the dynamics correlated with the resonances and their crossing due to dissipation within the electromagnetic analogue framework, which allows us to avoid the intricate formalism of the gravitational self-force [24,25].

The primary motivation of this work is to study the system with the electromagnetic self-force as an analogue of an EMRI. However, the fact that we study the dynamics of an electrically charged body in the vicinity of a magnetized rotating black hole makes the results of our analysis relevant also to other astrophysical applications. Hence, let us briefly discuss the framework of this model. We consider an asymptotically uniform magnetic field aligned with the rotation axis of the black hole, described by the vacuum model derived by Wald in [26]. Albeit asymptotically uniform, the field becomes largely deformed by frame dragging and other relativistic effects as we get closer to the horizon of the black hole [27,28].
Although the employed weak-field approximation does not take into account the effect of the field on the curvature of the spacetime, the geometric effects of the spinning black hole on the topology of the electromagnetic field are described completely by the given model. On the other hand, the effect of the charged matter on the electromagnetic field is neglected in this framework. This might appear to be an issue, especially within the inner parts of the accretion disk, where the presence of charges and currents significantly contributes to the field; in particular, small-scale magnetic fields may be induced there by turbulence driven by the magnetorotational instability [29]. Turbulence allows the transport of angular momentum and thus contributes to the viscosity of the disk, which is crucial for the accretion process. Nevertheless, for the study of resonance crossing, the small-scale structure of the magnetic field and a full description of the physics of accretion are not relevant. In fact, the organized large-scale field [26] provides an appropriate framework allowing the dissipation of energy and angular momentum due to radiation losses of the charged test body [30].

The model of an axisymmetric vacuum magnetosphere of a rotating black hole, described by the Wald solution [26] of Maxwell's equations on the Kerr background, has been employed in various contexts. For instance, it has been shown that this model (unlike the pure Kerr or Kerr-Newman background) allows stable off-equatorial circular orbits [31], which are astrophysically relevant as a basic model for studying the dynamics of diluted plasma in accretion disk coronae. Moreover, the magnetic field acts as a non-integrable perturbation triggering (deterministic) chaos in some regions of the phase space [32], and chaotic dynamics may contribute to launching the outflow of escaping jet-like trajectories [33-35]. Hence, although the model of a vacuum magnetosphere does not attempt to provide a complete description of the field topology of an accreting black hole, it represents a useful approximation which allows the study of various astrophysically relevant processes.

The rest of the article is organized as follows. Sec. II introduces the system of a charged body moving in a Kerr background with a test external magnetic field without the self-force. Sec. III discusses how resonances grow in this system. Sec. IV studies the crossings of resonances when dissipation due to the instantaneous electromagnetic self-force is imposed, while Sec. V compares the instantaneous self-force results with the adiabatic ones. Finally, Sec. VI examines our parameter choices and Sec. VII discusses our main findings.

II. MOTION OF A CHARGED BODY IN AN EXTERNAL ELECTROMAGNETIC FIELD

In this section, we discuss the motion of a charged body around a magnetized Kerr black hole without the electromagnetic self-force. We start with the equations of motion and the conserved quantities, and then we discuss the integrability of the system and the existence of a Carter-like constant.

A. Equations of motion

The equations of motion for a charged body in a curved background read

\frac{D U^{\mu}}{d\tau} = \tilde{q}\, F^{\mu}{}_{\nu}\, U^{\nu},   (1)

where D denotes the covariant derivative, τ is the proper time, q̃ = q/m denotes the specific charge of a body with rest mass m, U is the 4-velocity, and F_{μν} is the electromagnetic field tensor. The equations of motion (1) represent a set of second-order differential equations in the Boyer-Lindquist coordinates x^μ = (t, r, θ, φ).
However, one can employ the Hamiltonian formalism to obtain an equivalent set of first-order equations for the canonical coordinates (x^μ, π_ν), where π_ν = (π_t, π_r, π_θ, π_φ) are the components of the canonical four-momentum. The Hamiltonian of a body with electric charge q and rest mass m in a vector potential field A_μ can be defined as [36]

H = \frac{1}{2}\, g^{\mu\nu} (\pi_{\mu} - qA_{\mu})(\pi_{\nu} - qA_{\nu}),   (2)

where g^{μν} is the inverse metric of the background spacetime. The vector potential A_μ is related to the electromagnetic field tensor as

F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}.   (3)

The equations of motion are

\frac{dx^{\mu}}{d\lambda} = \frac{\partial H}{\partial \pi_{\mu}}, \qquad \frac{d\pi_{\mu}}{d\lambda} = -\frac{\partial H}{\partial x^{\mu}},   (4)

where λ ≡ τ/m is a dimensionless affine parameter. By employing the first equation we obtain the kinematical four-momentum P_μ = π_μ − qA_μ, and the conserved value of the Hamiltonian is therefore H = −m²/2.

B. Kerr spacetime

Within this study we consider the fixed spacetime background of a rotating black hole. The line element of the Kerr metric in Boyer-Lindquist coordinates reads

ds^2 = -\left(1 - \frac{2Mr}{\Sigma}\right) dt^2 - \frac{4Mar\sin^2\theta}{\Sigma}\, dt\, d\varphi + \frac{\Sigma}{\Delta}\, dr^2 + \Sigma\, d\theta^2 + \left(r^2 + a^2 + \frac{2Ma^2 r \sin^2\theta}{\Sigma}\right) \sin^2\theta\, d\varphi^2,   (5)

where

\Sigma = r^2 + a^2 \cos^2\theta, \qquad \Delta = r^2 - 2Mr + a^2,   (6)

M is the mass of the black hole and a is the Kerr spin parameter.

C. Asymptotically uniform magnetic field

We employ a simple model of a vacuum magnetosphere consisting of an asymptotically uniform magnetic field aligned with the spin of the black hole. The relevant test-field solution of Maxwell's equations on the Kerr background may be derived by exploiting the fact that in this case the Killing vectors themselves, as well as their linear combinations, solve the Maxwell equations [26]. In particular, the vector potential A_μ = (A_t, 0, 0, A_φ) of the solution corresponding to a magnetic field of asymptotic strength B_0 may be expressed in terms of the covariant components of the Kerr metric tensor g_{μν} as follows:

A_t = \frac{B_0}{2}\left(g_{t\varphi} + 2a\, g_{tt}\right), \qquad A_{\varphi} = \frac{B_0}{2}\left(g_{\varphi\varphi} + 2a\, g_{t\varphi}\right).   (7)

Since the vector potential of a stationary and axisymmetric field does not depend on the coordinates t and φ, the only non-zero components of the electromagnetic field tensor are F_{rt}, F_{rφ}, F_{θt} and F_{θφ} (together with their antisymmetric counterparts), obtained by differentiating Eq. (7) with respect to r and θ; in their explicit expressions we set x̃ ≡ sin θ and ỹ ≡ cos θ.

We consider the model of a weakly magnetized black hole, in which the contribution of the electromagnetic field to the stress-energy tensor T_{μν} is neglected and the field thus does not affect the spacetime geometry, nor the motion of electrically neutral bodies. Using such a test-field approximation is justified in astrophysical conditions, since the relevant field intensities encountered in cosmic environments are too low to affect the geometry significantly, even in the extreme cases of neutron stars and magnetars [37]. Even if the field given by Eq. (7) is asymptotically uniform, in the vicinity of a rotating black hole the field structure becomes distorted by frame dragging and other effects of strong gravity [e.g., 28]. In particular, the horizon of a Kerr black hole tends to expel the magnetic field as the spin increases. At extreme rotation (a = M), the expulsion becomes complete and the invariant magnetic flux through each hemisphere of the horizon drops to zero, in what is known as the black hole Meissner effect [27,38,39]. While this effect only operates in an axisymmetric field [40], other remarkable effects, like the emergence of magnetic null points, may arise in non-axisymmetric vacuum magnetospheres of black holes or neutron stars [41,42]. Nevertheless, in the present paper we restrict the discussion to the perfectly axisymmetric model of a Kerr black hole immersed in a weak, asymptotically uniform magnetic field aligned with the spin axis, described by Eq. (7).
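As a numerical illustration of Eqs. (5)-(7), the following is a minimal Python sketch of the covariant Kerr metric and the Wald potential; the parameter values are placeholders and geometrized units (G = c = 1) are assumed.

```python
import numpy as np

M, a, B0 = 1.0, 0.9, 1.0  # geometrized units (G = c = 1); illustrative values

def kerr_metric(r, th):
    """Covariant Kerr metric g_{mu nu} in Boyer-Lindquist coordinates, Eq. (5)."""
    s2 = np.sin(th) ** 2
    Sigma = r ** 2 + a ** 2 * np.cos(th) ** 2
    Delta = r ** 2 - 2.0 * M * r + a ** 2
    g = np.zeros((4, 4))
    g[0, 0] = -(1.0 - 2.0 * M * r / Sigma)
    g[0, 3] = g[3, 0] = -2.0 * M * a * r * s2 / Sigma
    g[1, 1] = Sigma / Delta
    g[2, 2] = Sigma
    g[3, 3] = (r ** 2 + a ** 2 + 2.0 * M * a ** 2 * r * s2 / Sigma) * s2
    return g

def wald_potential(r, th):
    """Wald vector potential A_mu = (A_t, 0, 0, A_phi) of Eq. (7)."""
    g = kerr_metric(r, th)
    A_t = 0.5 * B0 * (g[0, 3] + 2.0 * a * g[0, 0])
    A_phi = 0.5 * B0 * (g[3, 3] + 2.0 * a * g[0, 3])
    return A_t, A_phi

print(wald_potential(6.0, np.pi / 2))  # sample point on the equatorial plane
```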
D. Conserved quantities

In this section, we consider the system without the electromagnetic self-force. Therefore, the value of the Hamiltonian given by Eq. (2) remains conserved as H = −m²/2. Due to the stationarity and the axisymmetry of the system, the corresponding components of the canonical four-momentum, π_t and π_φ, are also conserved and define the integrals E (energy) and L_z (axial component of the angular momentum) as follows:

−E ≡ π_t = P_t + qA_t ,   L_z ≡ π_φ = P_φ + qA_φ .   (12)

In addition to the timelike and spacelike Killing vectors, the Kerr spacetime is also endowed with a Killing tensor. Thanks to it, once we switch off the magnetic field (or consider electrically neutral bodies, which are not affected by the weak field), we find a fourth conserved quantity, namely the Carter constant [43]. The existence of four independent integrals of motion in involution in a system of four degrees of freedom assures full Liouville integrability of the system [1]. Below we discuss the existence of a Carter-like constant in the presence of the external magnetic field.

In order to investigate that, we employ Carter's theorem [44,45]. This theorem assumes an axisymmetric spacetime whose inverse metric coefficients may be expressed as

g^{tt} = (T_tt + Θ_tt)/(Σ_r + Σ_θ) ,  g^{tφ} = (T_tφ + Θ_tφ)/(Σ_r + Σ_θ) ,  g^{φφ} = (T_φφ + Θ_φφ)/(Σ_r + Σ_θ) ,  g^{rr} = T_rr/(Σ_r + Σ_θ) ,  g^{θθ} = Θ_θθ/(Σ_r + Σ_θ) ,   (13)

where T_tt, T_tφ, T_φφ, T_rr, and Σ_r are functions that depend only on the radial coordinate r, while Θ_tt, Θ_θθ, Θ_tφ, Θ_φφ, and Σ_θ are functions of θ only. In such a spacetime, if we can express a Hamiltonian H_c in the form

H_c = (H_r + H_θ)/(W_r + W_θ) ,   (14)

where H_r and W_r are functions of r only, and H_θ and W_θ are functions of θ only, then the following quantity commutes with the Hamiltonian:

K = (W_r H_θ − W_θ H_r)/(W_r + W_θ) .   (15)

Given that our work concerns the Kerr background, by comparing the expressions in Eq. (13) with the Kerr metric coefficients we arrive at

T_tt = −(r² + a²)²/Δ ,  T_tφ = −2Mar/Δ ,  T_φφ = −a²/Δ ,  T_rr = Δ ,  Σ_r = r² ,  Θ_tt = a² sin²θ ,  Θ_tφ = 0 ,  Θ_φφ = 1/sin²θ ,  Θ_θθ = 1 ,  Σ_θ = a² cos²θ .   (16)

To check whether we can write the Hamiltonian (2) in a manner similar to Eq. (14), we first expand it:

H = (1/2) [ g^{tt} P_t² + 2 g^{tφ} P_t P_φ + g^{φφ} P_φ² + g^{rr} P_r² + g^{θθ} P_θ² ] .   (17)

In the above, the 4-momentum is related to the 4-velocity as P^μ = mU^μ. Note that P_r and P_θ are arbitrary, while P_t and P_φ can be obtained from the relations given in Eq. (12):

P_t = −E − (ε/2)(g_tφ + 2a g_tt) ,   P_φ = L_z − (ε/2)(g_φφ + 2a g_tφ) ,   (18)

where ε = qB₀ is a key parameter which scales the electromagnetic interaction and introduces the non-integrable perturbation to the geodesic motion. Note that, as we employ the weak-field solution given by Eq. (7), the charge q and the asymptotic magnetic induction B₀ always couple as qB₀, leaving ε as the only independent parameter.

Having done the above two steps allows us to rewrite the Hamiltonian in a form (Eq. (19)) in which a function F (Eq. (20)) couples the r and θ dependences. The Hamiltonian can then be written more compactly (Eq. (21)) in terms of H₀ and H_int (Eq. (22)), where H₀ corresponds to the Hamiltonian of a Kerr geodesic. Interestingly, the linear-order perturbation in ε only introduces an additional constant term to this Hamiltonian, as can be seen from Eq. (21). Therefore, if the Hamiltonian H₀ is separable in r and θ, so should be H_int. On the other hand, F is a coupling term in r and θ, which renders the total Hamiltonian H non-separable in these coordinates. Therefore, to be able to employ Carter's theorem, we drop the term F, which linearizes H in ε, and work with H_int. By using the metric components given in Eq. (13), we can break Eq. (22) into an r-dependent and a θ-dependent piece, so that the Hamiltonian given in Eq. (21) takes the form of Eq. (14), with H₁ proportional to ε(L_z − 2aE). By rewriting the above with H_r = H⁰_r − r²H₁ and H_θ = H⁰_θ − a²cos²θ H₁, we finally obtain the separable form. Hence, by ignoring the O(ε²) term in Eq. (22), essentially a constant term is added to the unperturbed Hamiltonian H₀ and, therefore, the overall integrability is retained. This is captured by the Hamiltonian H_int. By using Eq. (15), we then obtain the explicit expression for the Carter-like constant K (Eq. (27)), which we use for future reference. We have verified that the Poisson bracket between K and H_int identically vanishes, i.e., {K, H_int} = 0, as argued before. This serves as a sanity check for the derivation of the Carter-like constant K in Eq. (27).
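Such a sanity check is easy to automate. The snippet below is a generic sketch (our own, not from the paper) of a symbolic Poisson bracket in sympy; feeding it the explicit expressions for K and H_int, which we do not reproduce here, should return zero.

```python
import sympy as sp

def poisson_bracket(F, G, coords, momenta):
    """{F, G} = sum over mu of (dF/dx^mu dG/dpi_mu - dF/dpi_mu dG/dx^mu)."""
    return sp.simplify(sum(
        sp.diff(F, x)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, x)
        for x, p in zip(coords, momenta)))

# usage sketch: with sympy symbols r, th, p_r, p_th and explicit K, H_int,
# poisson_bracket(K, H_int, (r, th), (p_r, p_th)) should simplify to 0.
```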
Note that, originally, the quantity K − (L_z − aE)² was described as the Carter constant in Kerr spacetime [43]. However, we will confine ourselves to the definition (27) throughout the rest of the paper. In addition, we will use the notation Ẽ = E/m, L̃z = L_z/m, H̃ = H/m², and K̃ = K/m² to denote the specific energy, angular momentum, Hamiltonian, and Carter-like constant, respectively. Moreover, we define ε̃ = ε/m = q̃B₀. Finally, we should stress that the linearization of the Hamiltonian served only to introduce the concept of the Carter-like constant in the present section. In the rest of the paper, we always use the full Hamiltonian H and not H_int.

Recall that, unlike gravity, electromagnetism has both a repulsive and an attractive nature. In our system, assuming a > 0 and q > 0, repulsion occurs when the external magnetic field is parallel to the L_z component of the angular momentum of the body, while the force is attractive when they are antiparallel. We discuss both cases until Sec. IV B; from then on we restrict ourselves to the case of attraction, since we study this system as an analogue of an EMRI.

III. RESONANCE GROWTH

In the previous section, we have shown that the system of a non-radiating charged particle in a magnetized Kerr background preserves its integrability if the terms O(ε²) are neglected. Nevertheless, in the rest of the paper we consider the full Hamiltonian system (2) and we focus on the particular effects which the non-integrable perturbation has upon resonances. This section, in particular, discusses how a resonance grows as the perturbation increases, which we quantify by measuring the resonance width w. The width w of a resonance is formally defined as the difference between the maximum and the minimum values of the action on the separatrix of the pendulum to which the resonance can be approximated by a normal form (for more details see [7,46]). Here we adapt the approach applied in [7] and measure this width from rotation curves obtained by the analysis of Poincaré sections. The rest of this section outlines the employed procedure.

The resonance in a Poincaré section and the rotation curve

A resonance occurs when two or more characteristic frequencies of the system are commensurate. In our work, we are interested in resonances between the polar frequency ω_θ and the radial frequency ω_r. Resonances play an important role when an integrable system, like the geodesic motion in a Kerr background, is perturbed in such a way that integrability is lost. According to the KAM theorem [2], the tori of the integrable system that are sufficiently far away from a resonance survive the perturbation and are just slightly deformed. On such tori we observe quasiperiodic orbits, which are characterized by irrational frequency ratios. On the other hand, the resonant tori of the perturbed system dissolve, and only an even number of periodic orbits survive, according to the Poincaré-Birkhoff theorem [47]; half of these orbits are stable, with secondary islands of stability forming around them, and the other half are unstable, with corresponding asymptotic manifolds stemming from them. These orbits form the so-called Birkhoff chain.
A way to investigate the resonance growth is to track resonances on Poincaré sections for different values of ε̃, while keeping fixed the energy E, the angular momentum L_z, and the Kerr parameter a. In practice, in our system a Poincaré section is formed when we register the momentum π_r and the radial coordinate r at the crossings of the Hamiltonian flow through the equatorial plane with a specific orientation, e.g., when an orbit crosses the plane with π_θ > 0. Spotting a resonance on a Poincaré section just by inspecting it is often quite difficult; see, e.g., the top panel of Fig. 1. To achieve this in a two-degrees-of-freedom system, a useful tool is the rotation number [48,49], which provides the ratio of the characteristic frequencies. A rotation number can be calculated from a Poincaré section. The first step is to identify a fixed point x_c on the Poincaré section, around which closed curves are nested. This point is often called the centre of the main island of stability, and the curves correspond to cuts through tori for which the frequency ratio is an irrational number. In the top panel of Fig. 1 the position of the fixed point is marked by a black dot. In the next step, the rotation angles between successive intersections x_i of the orbit with the section are calculated with respect to x_c as

θ_i = ∠( x_{i+1} − x_c , x_i − x_c ) .

The rotation number is then defined as

ν_θ = lim_{n→∞} (1/(2πn)) Σ_{i=1}^{n} θ_i .

For finite n, the accuracy of the rotation number is of the order of 1/n. For an integrable non-degenerate Hamiltonian system, the rotation number changes monotonically as the initial conditions get radially further away from x_c. The respective curve is called the rotation curve. When integrability is broken, at the resonances the rotation curve either fluctuates randomly, when it is calculated on a chaotic layer, or exhibits a plateau, when it is calculated on a secondary island of stability. The bottom panel of Fig. 1 shows what the rotation curve looks like when we scan the section depicted in the top panel along π_r = 0. In both panels of Fig. 1, we do not see direct signs of a resonance. However, the rotation curve indicates where we have to look to find one. For example, to find the 1 : 3 resonance, one has to look between the first two initial conditions from the left of Fig. 1 or beyond the first initial condition from the right. Doing the latter provides a detail of the Poincaré section shown in the left panel of Fig. 2. The inset of this panel shows all three expected islands, while the main panel focuses on just one of the three islands of stability. The respective rotation curve is depicted in the right panel of Fig. 2, in which one can easily spot the characteristic plateau of the resonance. The length of this plateau measured along the radial coordinate r provides an adequately good measurement of the width of the resonance w.

Growth of a resonance

As was already mentioned, resonances in dynamical systems can be approximated locally by the dynamics of a pendulum [46]. Using this fact, one can show that the square of the width of a resonance is proportional to the perturbation parameter [7,46,50]. By plotting the width of a resonance as a function of ε̃, as we do in Fig. 3, we can correlate the perturbation parameter with ε̃. We investigate two main resonances (1 : 3 and 1 : 2), and the width of each resonance is determined by the length of the corresponding plateau on the rotation curve [7].
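As an aside, extracting the rotation number from a Poincaré section is straightforward to sketch in code. The following minimal Python function is our own illustration (not taken from the paper): pts is an (n, 2) array of successive (r, π_r) intersections and xc the fixed point at the centre of the main island, both assumed to be given.

```python
import numpy as np

def rotation_number(pts, xc):
    """Average rotation angle per crossing around the fixed point xc, in units of 2*pi."""
    v = pts - xc                          # intersections relative to the centre
    ang = np.arctan2(v[:, 1], v[:, 0])    # polar angle of each intersection
    dth = np.mod(np.diff(ang), 2*np.pi)   # rotation angle between successive points
    return dth.mean() / (2*np.pi)         # finite-n estimate, accurate to O(1/n)
```

Scanning a family of initial conditions along π_r = 0 and plotting this estimate against r reproduces the rotation curves discussed above.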
In Fig. 3 we fitted the data points for each resonance with the curve

log₁₀ w = A + B log₁₀ ε̃ .

For the two resonances under study, the 1 : 3 and the 1 : 2, we found B = 0.997577 and B = 0.992058, respectively. Given that the slope is 1 in both cases (within a numerical error of 1%), we deduce that the perturbation parameter driving the system away from integrability is proportional to ε̃². In other words, the system is integrable up to O(ε̃). This confirms our finding in Sec. II D, where we obtained that the Carter-like constant is valid up to O(ε̃). It is interesting to note that in this feature the system is similar to the case of a spinning body moving in a Kerr black hole background: there is a Carter-like constant valid up to linear order in the spin of the secondary [51][52][53][54][55], while the perturbation parameter driving the system to non-integrability appears to be proportional to the square of the spin [50].

Fig. 2 (caption): The left panel shows a detail of the Poincaré section of Fig. 1, focusing on an island of stability of the 1 : 3 resonance, while the right panel shows the respective rotation curve with the characteristic plateau. The inset plot in the left panel represents the Poincaré section for a particular initial condition. For the radial distance, we choose from 40.026M to 40.674M with a step size of 0.027M. Note that in the right curve for the rotation number we have included all these data points; however, for the illustration of the resonance in the left panel we have only shown a subset of the entire data set, simply to make the plots more reader friendly. To indicate the resonance width w, we have drawn two lines parallel to the ν_θ-axis at the endpoints of the plateau. The distance between these two endpoints along the r-axis corresponds to the width of the resonance w.
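The fit of Fig. 3 amounts to a linear regression on the log-log scale. A short sketch with placeholder data (the actual widths measured in the paper are not reproduced here):

```python
import numpy as np

eps = np.array([1e-4, 3e-4, 1e-3, 3e-3])          # hypothetical perturbation values
w   = np.array([2.1e-3, 6.2e-3, 2.0e-2, 6.3e-2])  # hypothetical resonance widths
B, A = np.polyfit(np.log10(eps), np.log10(w), 1)  # slope B, intercept A
print(B)   # B close to 1 implies w ~ eps, i.e. w**2 ~ eps**2
```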
IV. EFFECTS OF THE ELECTROMAGNETIC SELF-FORCE

The motion of an accelerated charged body constitutes an interesting theoretical problem and has a long-standing history [56]. Starting with the seminal works of Lorentz, Abraham, and Poincaré in the Newtonian case [57], the contributions of Dirac [58], Landau [59], and DeWitt and Brehme in the relativistic domain have expanded the field remarkably [60]. Recent times have also witnessed a significant growth of interest in this topic [30,61]. For an excellent review we refer to [23]. In this section, we introduce the respective equations of motion in a ready-to-use format, and we discuss the resonant crossing effects for different initial conditions.

A. Equations of motion with the self-force

Before delving into the relativistic case, let us first discuss the Newtonian counterpart. In this limit, the equations of motion are given by the Abraham-Lorentz equation [62]

m dV/dt = F_ext + (2q²/3) d²V/dt² ,

where V is the velocity, F_ext is the external Lorentz force, and t is the time. Note that the second term on the right-hand side captures the effect of the self-force of the moving charge, and it involves a derivative one order higher than the left-hand side. This leads to runaway solutions, which are physically inconsistent. One easy way to appreciate this is to switch off the external force, i.e., F_ext = 0, upon which we obtain V ∼ exp[3mt/(2q²)]. This diverges as t approaches infinity and leads to an unphysical system. In order to avoid this pathology, one typically adopts the approach introduced by Landau and Lifshitz in [59], known as the "order reduced" formalism. Within this approximation, the above expression can be written as

m dV/dt = F_ext + (2q²/3m) dF_ext/dt .

In the case of the Lorentz force, we have

F_ext = q ( E_ext + V × B_ext ) ,

where E_ext and B_ext are the electric and magnetic field, respectively. If we assume that these fields are independent of time, the total time derivative of F_ext reduces to spatial gradients of the fields along the trajectory together with the lowest-order (Lorentz) acceleration. Once the external fields are known, we can thus obtain the final trajectory of the body. Therefore, this approach gives a self-consistent way to deal with the electromagnetic self-force of a charged body.
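As a toy illustration of the order-reduced prescription (our own example, with arbitrary values and c = 1, not part of the original analysis), consider a nonrelativistic charge in a uniform magnetic field: replacing dV/dt inside dF_ext/dt by the lowest-order Lorentz acceleration yields a well-behaved equation in which the speed perpendicular to the field slowly decays, a synchrotron-like cooling.

```python
import numpy as np
from scipy.integrate import solve_ivp

q, m = 1.0, 1.0
k = 1e-3                          # k = 2 q**2 / (3 m), exaggerated for visibility
B = np.array([0.0, 0.0, 1.0])     # uniform field along z

def rhs(t, v):
    a_lorentz = (q/m)*np.cross(v, B)             # lowest-order acceleration
    a_self = k*(q/m)*np.cross(a_lorentz, B)      # order-reduced self-force term
    return a_lorentz + a_self

sol = solve_ivp(rhs, (0.0, 2e3), [0.1, 0.0, 0.05], max_step=0.5)
v_perp = np.hypot(sol.y[0], sol.y[1])
print(v_perp[0], v_perp[-1], sol.y[2][-1])   # v_perp decays; v_z stays constant
```

Note that no runaway appears: the self-force term is built from the external force alone, which is the whole point of the order reduction.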
The relativistic correction to the Abraham-Lorentz equation was first introduced by Dirac and is known as the Abraham-Lorentz-Dirac equation [58],

m dU^μ/dτ = F^μ_ext + (2q²/3) (δ^μ_ν + U^μ U_ν) da^ν/dτ ,

where F^μ_ext is the external Lorentz force, given as F^μ_ext = qF^μ_ν U^ν, and a^μ = dU^μ/dτ is the acceleration vector. Dividing by the rest mass, this can be rewritten in a more reader-friendly way as

dU^μ/dτ = q̃ F^μ_ν U^ν + (2q²/3m) (δ^μ_ν + U^μ U_ν) d²U^ν/dτ² .   (35)

By following the "order reduced" approach discussed in the Newtonian case, we replace the acceleration inside the self-force term by its lowest-order value, a^ν ≈ q̃ F^ν_λ U^λ. With this substitution, we obtain

d²U^ν/dτ² = q̃ (∂_λ F^ν_σ) U^λ U^σ + q̃² F^ν_λ F^λ_σ U^σ .   (37)

For our future reference, we will call the prefactor of the second term in Eq. (35), i.e., 2q²/3m, the radiation parameter k, which actually scales the electromagnetic self-force felt by the charged body. In the case of an EMRI driven by the GSF, the analogous parameter would be the mass ratio η = m/M. In our system, both parameters are formally related through the rest mass of the body, as expressed by Eq. (39). Note that even if we fix the value of k (with respect to M), the mass ratio still depends on the arbitrary choice of the specific charge. For instance, in our subsequent numerical examples we employ the value k = 10⁻³M, for which η ∼ 10⁻¹(q̃)⁻². However, one has to keep in mind that the GSF and the electromagnetic self-force (ESF) are intrinsically different. Therefore, the mass ratio found from the above formula is not directly relevant for the GSF applied in EMRI systems.

In curved spacetime, the self-forced motion of a charged body was derived by DeWitt and Brehme [60]:

m DU^μ/dτ = F^μ_ext + f^μ_R ,   (40)

where f^μ_R is the radiation reaction, given by an expression (Eq. (41)) consisting of a local term, a part proportional to the Ricci tensor, and the tail term f^{μν}_tail (Eq. (42)), which involves an integral of the derivative of the retarded Green function over the entire past worldline z(τ′) of the body. In the present case, we can simplify Eq. (40) further by taking advantage of working in vacuum and setting the Ricci tensor to zero; therefore, the second parenthesis in Eq. (41) vanishes and does not contribute. Moreover, by following the discussion in [30] and [61], we can also neglect the tail term, and finally arrive at

DU^μ/dτ = q̃ F^μ_ν U^ν + (2q²/3m) (δ^μ_ν + U^μ U_ν) D²U^ν/dτ² .   (43)

We note that in its gravitational analogue (i.e., in an EMRI driven by the GSF), the tail term (which corresponds to the effect of the backscattered gravitational radiation) remains relevant. Nevertheless, in the present electromagnetic model its contribution is negligible, as further discussed in Sec. VI; hence, we ignore this term in our calculations. This approximation is sufficient for our study, as the key objective of our work is to study resonance crossings in an EMRI analogue model with the help of a slow dissipation introduced by the ESF. Note that Eq. (43) matches the flat-spacetime relation given in Eq. (35), except for the fact that the ordinary derivative is now replaced by the covariant derivative. With this in mind, the expression for D²U^ν/dτ² can also be obtained directly from Eq. (37) by switching to covariant derivatives:

D²U^ν/dτ² = q̃ (∇_λ F^ν_σ) U^λ U^σ + q̃² F^ν_λ F^λ_σ U^σ .   (44)

Unlike the Hamiltonian system discussed in Sec. II, where the single parameter ε̃ is sufficient to describe the interaction with the EM field, here the situation becomes slightly more complicated and two independent parameters are needed: namely, we have to deal with ε̃ and k when the radiation reaction is taken into account. Indeed, by inspecting Eqs. (43) and (44), we observe that although B₀ again couples with q̃ in both terms, the multiplication by the factor k is present only in the second term. Therefore, we need to set both values ε̃ and k independently. Moreover, by inserting (44) into (43), one notices that the leading perturbation parameter is ε̃, which comes from the Lorentz force Eq. (1), while the self-force introduces higher-order perturbations, i.e., kε̃ and kε̃². This indicates that even if the self-force is taken into account, the width of the resonance is mainly defined by ε̃. In all future references to the ESF, we consider the equation of motion given by Eq. (43), and we parametrize the dissipating trajectories by ε̃, k, and the initial values of Ẽ and L̃z.

B. Resonance crossings

From the Hamiltonian system, we are already informed about the existence of resonant islands for different initial conditions. We can place a body on an initial condition right outside a known resonance layer and dissipate the system using the self-force (43). The expectation is that the inspiralling body will at some point hit the resonance and cross it. Such a resonance crossing is shown in Fig. 4. Since the Hamiltonian system is non-integrable, in accordance with the EMRI terminology we call these resonances prolonged [7,50]. The top panel of Fig. 4 shows a stroboscopic depiction of a 1 : 3 resonance crossing on a Poincaré section¹, which means that we used only every third point of the section's sequence. Using each point of this sequence as an initial condition, we can evolve the system without the self-force in order to find the rotation number for each of these points, as was also done in [7]. The result is the rotation curve shown in the bottom panel of Fig. 4, where the rotation numbers are plotted with respect to the proper time. The plateau at 1 : 3 indicates the points corresponding to the crossing of the resonance.

With the dissipation turned on, previously constant quantities evolve. Fig. 5 shows a typical evolution of the energy Ẽ, the angular momentum L̃z, the relative error of the Hamiltonian ΔH/H, and K̃. The energy and the angular momentum follow an almost linear decline, while the Hamiltonian is conserved up to numerical precision. The behavior of the quantity K̃ deserves more attention. Unlike the other three quantities, it is actually not an integral of motion, and its value oscillates even without the self-force. Moreover, when the self-force is introduced, K̃ exhibits an abrupt drop during the resonance crossing. Such jumps are quite common at resonance crossings induced by the GSF [16,20], and they typically scale as the square root of the parameter which perturbs the system and induces the dissipation (i.e., the square root of the mass ratio in the case of the GSF). In our model, the perturbation parameter ε̃ is independent of the self-force effects captured by the parameter k. As was already discussed in Sec. IV, the width of the resonance should be determined mainly by ε̃. A reasonable expectation is that the jump in K̃ should be proportional to the width of the resonance. To confirm this, we performed some numerical checks. For the 1 : 2 resonance, with ε̃ = 10⁻³M⁻¹ and r_i = 41.2M, the jump in K̃ (the change in its value between entering and leaving the resonance) is ∼ 0.08M², which is approximately ∼ √ε̃ M. For the 1 : 3 resonance, the jump in K̃ for r_i = 40.098M and ε̃ = 10⁻³M⁻¹ is ∼ 0.6M², i.e., an order of magnitude higher than √ε̃ M.
We speculate that this discrepancy is caused by different coefficients entering the proportionality between the width and the jump. However, this demands a meticulous investigation, which we plan for future work.

V. COMPARISON BETWEEN THE ADIABATIC APPROXIMATION AND FULL SELF-FORCE

One of the key objectives of this paper is to compare the adiabatic approximation with the instantaneous electromagnetic self-force computations. This may also help us shape our ideas in the GSF sector: since the application of the self-force is considerably easier in the electromagnetic case than for its gravitational counterpart, we can exploit this advantage. For the adiabatic approximation, we have not followed the traditional approach of obtaining the fluxes of energy and angular momentum, as is typically done in the literature [13]. Namely, in the traditional adiabatic approach to an EMRI, the evolution of the system tracks the slow dissipation of the constants of motion: the energy, the angular momentum, and the Carter constant. Since the geodesic motion in a Kerr background is integrable, one can correlate the values of the constants with the orbital parameters of the body and track the inspiral. In our case, however, the system (unless linearized in ε) is non-integrable and lacks the Carter-like constant. To employ an adiabatic scheme for the non-integrable system and compare it with the full self-force results, we proceed as follows. We use the energy and angular momentum values obtained from the instantaneous self-force to fit the respective data sets, and obtain the energy and momentum as functions of time. In other words, we can write the energy and momentum as

E(τ) = E(0) + Σ_{n=1}^{N} a_n τⁿ ,   L_z(τ) = L_z(0) + Σ_{n=1}^{N} b_n τⁿ ,   (45)

where a_n and b_n are the usual expansion coefficients capturing the effects of the self-force, and E(0) and L_z(0) are the initial values of the energy and momentum, respectively. The instantaneous self-force comes with the advantage of adding as many orders of correction as we want; it is only a matter of higher-order fitting, i.e., higher N. In an example discussed in the Appendix, we indicate that the order of the fitting may affect the final outcome of the resonance crossing. Our numerical investigation showed that the mismatch is marginal if we consider N ≥ 2. In most cases, when reproducing the plots, we take the fit to be a fourth-order polynomial (N = 4). Our adiabatic scheme is similar to those used in [4,6,7,63]; the only difference is that those studies used averaged fluxes instead of the instantaneous self-force. As in these studies, the absence of a Carter-like constant limits our dissipation scheme to using just the energy and angular momentum losses. By using the fitted functions of the energy and momentum, as pointed out in Eq. (45), we obtain P_t and P_φ as functions of time:

P_t(τ) = −E(τ) − qA_t ,   P_φ(τ) = L_z(τ) − qA_φ .   (46)

The self-force corrections are encoded within the time evolution of the energy and angular momentum. To obtain the other components, namely P_r and P_θ, we use Eq. (1), which does not include the dissipative effects. The entire premise of using the adiabatic approximation here is to average out the instantaneous self-force contribution and to ignore some of its components. By comparing the results obtained from the adiabatic evolution with the self-force ones, we intend to deduce arguments relevant for the gravitational EMRI counterpart. In Fig. 6, we consider a typical example where the adiabatic evolution is shown. The detailed parameter space and the fitting parameters are given in the caption of the plot.
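Schematically, the fitting step of Eq. (45) looks as follows (a sketch of our own; here tau, E, and Lz are placeholder arrays standing in for the proper-time series recorded along the self-force inspiral):

```python
import numpy as np
from numpy.polynomial import polynomial as P

tau = np.linspace(0.0, 1e4, 200)        # placeholder proper-time grid
E   = 0.98 - 1e-6*tau                   # placeholder, nearly linear decay (cf. Fig. 5)
Lz  = 3.80 - 4e-6*tau

N = 4                                   # fourth-order fit, as used for most figures
cE  = P.polyfit(tau, E, N)              # coefficients (E(0), a_1, ..., a_N)
cLz = P.polyfit(tau, Lz, N)             # coefficients (Lz(0), b_1, ..., b_N)
E_fit  = lambda t: P.polyval(t, cE)
Lz_fit = lambda t: P.polyval(t, cLz)
# the adiabatic evolution then sets pi_t = -E_fit(tau) and pi_phi = Lz_fit(tau)
```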
If we ignore the scattered points, we can see that the relative error in the Hamiltonian steadily grows (top panel of Fig. 6), in contrast to the self-force calculation (top right panel of Fig. 5). The latter shows that the Hamiltonian remains conserved up to O(10⁻¹⁰), i.e., to numerical precision. Hence, one of the side effects of using our scheme for the adiabatic approximation is that the mass of the inspiralling body changes with time. The evolution of K̃ is also affected by the adiabatic approximation, as can be seen from the bottom plot of Fig. 6, where it is compared with the instantaneous electromagnetic self-force calculations. In particular, the difference grows in time and becomes more prominent near the resonance. This might be an artifact of the employed adiabatic approximation, possibly enhanced by the fact that we ignored the evolution of K̃. It has been shown by Isoyama et al. [64,65], and recently by Nasipak and Evans [66], that the evolution of the Carter constant is crucial in order to describe the adiabatic evolution of EMRIs through resonances. Therefore, it would be interesting to include it in the electromagnetic case as well, but we leave this for future work.

The above-discussed discrepancies have the consequence that evolutions from the same initial conditions do not give the same time ∆t_r that the inspiral spends in the resonance for the adiabatic and the self-force approach (Fig. 7). In order to obtain ∆t_r for different initial conditions, we follow the prescription of Ref. [7]. Given an initial condition, we evolve the dissipative system for a sufficiently long time to obtain ∼10³ to 10⁴ points in the Poincaré section. Referring to Fig. 4, we encounter a similar structure in the Poincaré section, which hints where and when the particle meets the resonance. Once we pin down the locations of the resonance, we note the coordinates of these points, along with the corresponding 4-momentum, energy, angular momentum, and value of the Hamiltonian. Afterwards, we evolve each of these points conservatively for a significant amount of time and evaluate the rotation number. We repeat this procedure for each initial condition given in Fig. 7.

By looking at the values of ∆t_r shown in Fig. 7, the time spent by the inspiral in the resonance appears to be qualitatively the same for both approaches. Even exotic cases, like those where the inspiral enters a resonance but does not leave, seem to be reproduced both by the self-force and by the adiabatic evolution. Note that this exotic effect is similar to the sustained resonances [67] appearing in EMRI studies [68]. Hence, the adiabatic scheme appears to be sufficiently faithful to the instantaneous self-force evolution. In Fig. 7, the initial conditions giving the "sustained" type of resonance crossings are indicated by two nearby vertical lines. These lines lie between a maximum and a minimum of the ∆t_r(r) plot. The absence of these lines in the self-force evolution through the 1 : 3 resonance (left plot of Fig. 7) is probably caused by its very small width. The minima of the ∆t_r(r) plot correspond to inspirals crossing through the vicinity of the unstable periodic orbit of the resonance. The maxima of the ∆t_r(r) plot correspond to inspirals entering sufficiently deep into the islands of stability (formed around the stable periodic orbit), which spend a considerable time period within the island before they exit the resonance.
On the other hand, if an inspiral enters too deep into the island of stability, it becomes trapped by the resonance for a very long time, exceeding the integration time.

VI. ASTROPHYSICAL RELEVANCE OF THE PARAMETERS

The model was discussed in geometrized units scaled by the rest mass of the central black hole. In order to check the astrophysical consistency of the employed values, we use the relation between the radiation parameter k, the mass ratio η, and the specific charge q̃ given by Eq. (39). In particular, for our numerical examples we employ the value k = 10⁻³M, for which the mass ratio yields η ∼ 10⁻¹(q̃)⁻². Fixing the mass ratio at the value relevant for EMRI systems, η = 10⁻⁴, thus leads to q̃ ∼ 1. Generally, the upper limit on the relevant value of the specific charge would be set by an electron, with |q̃_e| ∼ 10²¹ in geometrized units. On the other hand, the theoretical limit on the charge of the static (Reissner-Nordström) black hole is |q̃_RN| = 1. For the value employed in our analysis, |ε̃| = |q̃B₀| = 10⁻³M⁻¹, we may then retrieve the value of the magnetic induction in physical units once M is specified (Eq. (47)). In particular, for the black hole in the center of the M87 galaxy, with mass M ∼ 10¹⁰ M_⊙ [69], we obtain (B)_SI ∼ 10² T. This is several orders of magnitude more than the value derived from the recent observations of M87 with the Event Horizon Telescope [70], but still within the range of realistic estimates for accreting black holes [71,72].

The dissipative trajectories studied in the present paper were evolved by Eq. (43), in which the contribution of the tail term was neglected. While the estimates presented in [61] justify such an approximation for the case of a single charged particle, such as an electron near a magnetized black hole, we need to verify its validity for our scenario of an EMRI analogue. To proceed, we employ results from [73], where the self-force on a static point charge q of mass m near a (Schwarzschild) black hole of mass M is computed. In such a system, with no external electromagnetic field, the self-force appears solely due to the interaction between the field of the point charge and the black hole curvature, and it thus allows us to estimate the contribution of the tail term (the only part of the self-force which remains when the magnetic field is switched off in our model). The ratio Ψ between the self-force and the gravitational force (which remains dominant in our case, as we set |ε̃| = 10⁻³M⁻¹) is shown in [73] to have its maximum close to the horizon (namely at r = 3M in the Schwarzschild spacetime) and to drop as Ψ ∝ 1/r farther from the black hole. In particular, the maximum value Ψ_max, expressed by quantities in SI units, is given by Eq. (48), where k_C and G are the Coulomb and the gravitational constants, respectively. For an electron and a black hole of one stellar mass we get Ψ_max ∼ 10⁻¹⁹. For the most unfavourable EMRI case of η = 10⁻⁴ and q̃ = 1 (extremal Reissner-Nordström black hole), the ratio yields Ψ_max ∼ 10⁻⁵, which makes this contribution negligible even in this worst-case scenario, while for the radii corresponding to our numerical examples this ratio reduces at least to Ψ_max ∼ 10⁻⁶.

The above analysis shows that our model and the employed approximations are generally consistent with the conditions encountered in astrophysical systems. However, we stress that the model is not proposed as one directly corresponding to an EMRI and, in particular, the values of the mass ratio formally expressed in Eq. (39) cannot be straightforwardly identified with the mass ratio parameter of an EMRI system driven by the GSF.
Instead of modelling particular dynamic properties of an EMRI, the motivation of our analogue model is more general: our aim is to study fundamental properties of resonances affected by a non-integrable perturbation and the behavior of trajectories crossing such resonances due to dissipation caused by a self-force. Our setup allows us to test the reliability of the adiabatic approximation. In particular, in the present work we raised (and positively answered) the question of whether the evolution of resonance-crossing trajectories might be reasonably approximated by the adiabatic (averaged) prescription for the dissipation of Ẽ and L̃z.

VII. SUMMARY AND DISCUSSION

In this work we studied the dynamics of a charged body orbiting a magnetized Kerr black hole. This non-integrable system bears some dynamical similarities with the system of a spinning body moving in the pure Kerr background. In particular, the trajectories in both systems deviate from geodesics: in the first system due to the Lorentz force, in the second due to the spin-curvature coupling. In both systems, the induced perturbation breaks the full integrability: in the first case it is the presence of the magnetic field, while in the second it is the spin of the secondary body which makes the system non-integrable. In both systems there is a Carter-like constant which holds up to linear order in the perturbation term, while effects of non-integrability appear due to terms quadratic in the perturbation. This fact has recently been demonstrated for the spinning body [50,55], while for the charged body orbiting a magnetized Kerr black hole we showed it in Sec. II of the present paper. The above reasons make the latter system an interesting electromagnetic analogue of an EMRI, which allows us to study the dynamics of the inspiralling body during a resonance crossing induced by the self-force.

In our study we induced dissipation of the charged body's motion using two approaches. First, we considered the instantaneous electromagnetic self-force without its tail terms. We evolved the system through the 1 : 3 and 1 : 2 resonances and studied the crossings of these resonances for various initial conditions. During the evolution of these crossings, we computed the losses of the energy, the angular momentum along z, and the Carter-like quantity K̃. We noticed that although the energy and the angular momentum were changing relatively smoothly, K̃ experienced an abrupt change due to the resonance crossing. It is not clear why only K̃ (and not the other constants) exhibits such behavior. Is it a feature of the electromagnetic self-force which will not be reproduced in the GSF case? Further investigation is needed to determine the reason for this discrepancy.

Since we calculated how the energy and angular momentum change along each trajectory, we were able to interpolate the time evolution of these quantities. Using these interpolations, we applied an adiabatic scheme to evolve orbits crossing the 1 : 3 and 1 : 2 resonances. This allowed us to test whether the adiabatic scheme represents a faithful approximation of the instantaneous self-force. We were able to check that the adiabatic crossings of the resonance last for time intervals that are quantitatively comparable with those given by the instantaneous self-force.
This does not mean that there are no discrepancies, like the presence of "sustained" resonances in the case of the 1 : 3 resonance, which occur in the adiabatic approximation but not with the full self-force. However, this discrepancy might be an artifact of our adiabatic scheme, in which the dissipation of the Carter-like constant is not prescribed. We plan to further investigate this issue and possibly optimize our adiabatic approximation. Nevertheless, the fact that we obtained faithful results regarding the resonance crossing duration without prescribing the dissipation of the Carter-like constant is remarkable and might also have an application in non-integrable systems where a Carter-like constant does not exist even to linear order in the perturbation. Our results strongly indicate that it is sufficient to adiabatically dissipate the system just through the energy and the angular momentum in order to find the correct times that resonance crossings last in EMRIs. Since we studied an analogue model driven by an electromagnetic self-force, any particular numerical values of the observable quantities (e.g., time intervals spent in resonances) are not directly relevant from the observational perspective of EMRIs. However, the main result of our analysis, which is the remarkable correspondence between the instantaneous self-force and its adiabatic approximation, is supposed to hold for a significantly broader class of non-integrable dynamical systems with dissipation. In particular, our results provide an indication that the adiabatic approximation might be sufficient to faithfully model the intricate dynamics of resonance crossing of an EMRI.

Appendix

Fig. 8 (caption): For the upper panel, we set E(τ) = E₀ + a₁τ + a₂τ², while for the lower panel, we set E(τ) = E₀ + a₁τ + a₂τ² + a₃τ³. In the upper case, the body seems to repeat the loop several times, while in the lower case, the body crosses it only once. This example is for the initial distance r = 40.09966M, Ẽ(0) = 0.98, L̃z(0) = 3.8M, a = 0.5M, ε̃ = −10⁻³, and k = 10⁻³.

Close to the jump from maxima to minima, the adiabatic framework begins to encounter problematic behavior. We demonstrate in Fig. 8 how the polynomial fit of energy and angular momentum may affect the adiabatic evolution. Fig. 8 shows the radial coordinate of every third crossing of the inspiral through the equatorial plane when π_θ > 0, as a function of the proper time τ. Basically, we use a stroboscopic depiction of a Poincaré section, as was discussed when introducing the top panel of Fig. 4. The orbit starts on the equatorial plane with π_r = 0 and r = 40.09966M, but for aesthetic reasons we use the immediately next crossing through the Poincaré section to produce the stroboscopic picture, in both the top panel of Fig. 4 and in Fig. 8. In the latter figure, high peaks correspond to the part of the inspiral moving on KAM tori away from the resonance, while low peaks correspond to the phase of the evolution spent in the resonance on an island of stability (see the top panel of Fig. 4). To produce the top panel of Fig. 8, we use a second-order polynomial fit for the energy and the angular momentum. The multiple low peaks indicate that the inspiral spends a significant number of cycles in the resonance. However, this feature simply disappears if we consider a higher-order polynomial fit, as depicted in the lower panel of Fig. 8. Hence, as we move closer and closer to the jump from maxima to minima, we need a higher-order polynomial fit to avoid such artifacts.
Nonetheless, for some initial conditions, as shown within the dashed blue lines of Fig. 7, it is not possible to avoid inspirals being trapped in the resonance. This kind of entrapment seems to be a feature of the system, since we see it happening also in the self-force-driven evolution. For the second example, we again consider the adiabatic approximation close to the jump from maxima to minima. In Fig. 9, we provide an example of an adiabatic evolution crossing a 1 : 2 resonance. We notice that the journey through the resonance is not smooth, and some points deviate from the plateau. This is not a numerical artifact, yet it is not present in the full self-force computations. However, we observe this feature for both the 1 : 2 and the 1 : 3 resonances when evolving the system adiabatically.
Studying Speciation: Genomic Essentials and Approaches

A genome comprises the entire genetic material of an organism and consists of DNA, which is in turn constructed of hundreds to billions of nucleotides. Nucleotides are organic molecules composed of three subunits: a nitrogenous base, a sugar (deoxyribose), and a phosphate group. The DNA is differentiated into coding (genes) and noncoding regions. A gene is a specific region of DNA that encodes a function. All genes present within an organism represent its genotype. The genotype determines the phenotype, which is, however, additionally affected by the environment and the individual development (ontogenesis). A gene may affect a single or several phenotypic features (pleiotropy). Likewise, a phenotypic feature may be affected by one or several genes, the latter comprising polygenic traits. In the process of gene expression, the information encoded by a gene is used to generate a product. The expression of genes is regulated by the external environment (temperature, stress, resource availability), the internal environment (metabolism, cell division cycle), and the gene-specific role in the respective tissue of the organism. Several processes underlying evolutionary change, e.g., mutation, genetic drift, gene duplication, selection, and migration, may change the genome at levels ranging from single bases through genes to the whole organism. Such changes may result in population differentiation and eventually speciation. Molecular genetic studies on microevolution and speciation started with single genetic markers, e.g., the COI marker gene. Today, mainly genomic and transcriptomic approaches, making use of a large number of markers such as single nucleotide polymorphisms or microsatellites, are used to compare species, populations, and individuals.

Structure of the Genetic Material

The deoxyribonucleic acid (DNA) is constructed of hundreds to billions of nucleotides, which in turn are constructed of nucleosides. A nucleoside consists of a purine or pyrimidine base linked to a pentose sugar, whereby a purine is a double-ringed nitrogenous base such as adenine (A) and guanine (G), and a pyrimidine a single-ringed nitrogenous base such as cytosine (C) and thymine (T) (Fig. 3.1) or uracil (U). If a nucleoside is linked to a phosphate group on either the 5′ or 3′ carbon of the deoxyribose, it is called a nucleotide. The pentose sugars of two adjacent nucleotide monomers are connected through a phosphate molecule, linking the nucleotides into a long chain. Such a chain creates a single strand of DNA, with one end of the chain having a free 5′ end and the other a free 3′ end. Two antiparallel and complementary strands can be connected by hydrogen bonds between guanine and cytosine or adenine and thymine, respectively (Alberts et al. 2014). The two DNA strands wound around each other in opposite directions form the DNA double helix (Fig. 3.1). In eukaryotes, such as plants, mammals, birds, and many more (organisms whose cells have a nucleus and other organelles enclosed by membranes), the DNA is organized into chromosomes (Fig. 3.1) within the cell nucleus (plus another DNA molecule in each mitochondrion). Functionally, the genome is divided into genes, i.e., sequences of DNA that each encode a single type of ribonucleic acid (RNA). Due to the diploidy of eukaryotic organisms, gene loci occur twice in eukaryotic genomes, as one maternally and one paternally inherited copy.
An allele is one of several alternative forms of a gene occupying a given locus on a chromosome. All genes of one individual, which were transmitted from its parents, make up the genotype. The genotype produces the phenotype, which is the collection of all observable traits of an organism, e.g., height and eye color (Lesk 2012). Three bases jointly represent a codon or triplet, and genes include a series of codons that are read sequentially from a starting point at one end to a termination point at the other end. Each triplet codes for a single amino acid in the corresponding protein. There are 64 (4³) possible codons but only 20 naturally occurring amino acids. This means that several codons correspond to the same amino acid (Alberts et al. 2014). In contrast to DNA, RNA is evolutionarily older and has a different sugar (ribose) and a different base (uracil), the latter replaced in DNA by the base thymine. The ribose makes RNA less stable than DNA, and the production of uracil is less complex, because uracil is the unmethylated form of thymine (Alberts et al. 2014).

Fig. 3.1 (caption): Each cell contains a nucleus with chromosomes. These chromosomes comprise nucleosomes to pack the genetic material in the nucleus. A nucleosome consists of a DNA section wound around a histone. Certain parts of the DNA (genes) carry the information for the cell to encode a specific function. The DNA is structured as a double helix and consists of the four bases adenine (A), guanine (G), cytosine (C), and thymine (T). (The graphic was modified for this book chapter from the National Human Genome Research Institute (NHGRI); website www.genome.gov)

Noncoding and Coding Regions

The coding regions encode RNAs, which either result in a protein (messenger RNA, mRNA) or act directly in the case of other functional RNAs. The mRNA is a single-stranded RNA. The protein-coding part(s) of a eukaryotic gene is/are the exon(s). The number of exons can vary. Introns separate the exons from each other, such that introns and exons alternate. After the splicing process, all introns are removed and only the exons remain. All exons are described by open reading frames (ORFs), beginning with a start codon (ATG) and ending with a stop codon (Lesk 2012). Additionally, the 5′ and 3′ untranslated regions (UTRs), which form the edges of the mRNA, do not code for parts of the protein. Introns tend to have a higher mutation rate than exons, because they do not encode part of a protein sequence; thus, the sequence of an exon is more conserved than that of an intron. Introns play an important role, because a single eukaryotic gene can code for several proteins, which can have different lengths due to alternative splicing. The noncoding regions are parts of the DNA that do not encode functional RNAs. Noncoding regions consist of transposable elements (TEs), retroviruses, and long and short interspersed nuclear elements (LINEs and SINEs), among others. TEs are selfish genetic elements, which either copy and paste through an RNA intermediate or directly cut and paste in their DNA form (Kapusta and Suh 2017). An abundant transposable element in birds is the CR1 element. Until now, 14 CR1 families have been described in birds (Kapusta and Suh 2017). TEs can be classified into LINEs and SINEs. LINEs are autonomous retrotransposons and usually consist of two ORFs (Kapusta and Suh 2017).
SINEs are non-autonomous non-long-terminal-repeat retrotransposons, which parasitize LINEs (6,000-17,000 SINEs in birds versus 1,500,000 in humans, i.e., less than 0.1% of avian genome sequences) (Kapusta and Suh 2017). Another example of a noncoding region is the retrovirus, an RNA virus which can convert its sequence into DNA by reverse transcription (explained in Sect. 3.2.2). Endogenous viral elements (EVEs) are retroviruses that rely on obligate integration into the host genome and can be classified as LTR retrotransposons (Kapusta and Suh 2017).

Autosomes Versus Sex Chromosomes

A chromosome contains part of the genetic material of a eukaryotic organism and consists of chromatin, which is a complex of DNA and proteins. Most of these proteins are histones (Fig. 3.1), which wrap up the DNA in the nucleus. The number and appearance of the chromosomes is called the karyotype. Eukaryotic cells can be either diploid or haploid. The term haploid means that chromosomes occur in a single set, while diploid cells have a double set of chromosomes. Most eukaryotic organisms have diploid cells; thus, all chromosomes appear twice. However, eukaryotic organisms differ in the number of chromosomes; humans, for example, have 46 chromosomes. Furthermore, chromosomes are divided into autosomes and sex chromosomes. Autosomes are pairs of chromosomes in a diploid cell which have the same form, each chromosome pair with a specific length. Humans have 44 autosomes and 2 sex chromosomes. Sex chromosomes differ from autosomes in length and in the function of their genes. They include the sex-determining region Y (SRY) gene on the Y chromosome. Furthermore, in humans, men have two different sex chromosomes, X and Y, while women have two X chromosomes. However, this XY sex-determination system is not present in all eukaryotes, but only in humans, most other mammals, several insects, some snakes, and a few plants. Another system is the ZW sex-determination system, which can be found in birds, several fishes, crustaceans, some insects, and reptiles. In the ZW sex-determination system, males have two Z chromosomes, while females have a Z and a W chromosome (Scanes 2015). Sex determination in birds is probably governed by a gene on the W chromosome, analogous to the SRY gene on the Y chromosome. The Z and X chromosomes are larger and contain more genes than the W and Y chromosomes. Not only the sex chromosomes but also the autosomes can differ among eukaryotic organisms. In lizards, snakes, turtles, and birds, the autosomes can be divided into micro- and macrochromosomes (Matsubara et al. 2006; Ellegren 2013). Microchromosomes are tiny chromosomes under 20,000,000 bp in length, while macrochromosomes are larger than 40,000,000 bp and resemble mammalian autosomes in size. Characteristic of microchromosomes is that they show high rates of meiotic recombination, high guanine-cytosine (GC) content, short introns, high densities of genes and of cytosine-phosphate-guanine (CpG) islands, and low densities of transposable elements and other repeats, but many repetitive sequences (Scanes 2015; Kapusta and Suh 2017). Another important concept concerning chromosomes is synteny, which describes the conserved location of genetic loci on the same chromosome within an individual or species, or even among species.
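Quantities like the GC content mentioned above are straightforward to compute from sequence data; a toy Python sketch (our own illustration, with an arbitrary sequence):

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

print(gc_content("ATGGCGCATTAGC"))   # -> 0.538... for this toy sequence
```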
Nuclear Genome and Mitochondrial Genome

A common feature of eukaryotic organisms is that they have two genomes, the nuclear and the mitochondrial genome (photosynthetically active eukaryotes such as plants additionally carry a plastid genome). The nuclear genome is organized in chromosomes as detailed above, while the mitochondrial genome is circularly or linearly organized and located within the mitochondria (which derived from bacteria). Generally, the nuclear genome is larger and contains many more genes than the mitochondrial genome.

The Chicken Model: History and Overview

An important model organism is the chicken Gallus gallus domesticus, because it was the first agricultural animal whose genome was sequenced, and it shares a relatively recent common ancestor with mammals. The ancestors of mammals and birds diverged 310 million years ago according to mitochondrial findings (Griffin et al. 2007). Furthermore, the chicken is the main laboratory model for the over 10,828 extant bird species (Gill and Donsker 2017). The chicken genome has a size of 1.2 gigabases (Gb) (Lesk 2012), while avian genomes average around 1.35 Gb, ranging from the smallest, the Black-chinned Hummingbird Archilochus alexandri with 0.9 Gb, to the largest, the Common Ostrich Struthio camelus with 2.1 Gb (Scanes 2015; Kapusta and Suh 2017). The genome of the chicken was sequenced for the first time in 2004 at 6.6× coverage, using whole-genome shotgun reads. The resulting assembly comprised 933,000,000 bp for an estimated genome size of 1.05 Gb (Hillier et al. 2004). In chicken, it is difficult to identify genes on the W chromosome as well as on the microchromosomes, due to the high number of repetitive sequences. However, the Z chromosome is well explored; it contains nearly the same genes in all birds and is therefore highly conserved among bird species. About 1,000 genes are located on the Z chromosome of the chicken that are absent from the W chromosome. The W chromosome is degraded to different extents in some bird lineages (Marshall Graves 2015), which is why it is smaller, poorer in genes, but richer in repeats in most birds (Scanes 2015). Avian karyotypes have been unusually stable during evolution, with some exceptions showing chromosome numbers from 40 to 126 due to numerous microchromosomes (Griffin et al. 2007; Scanes 2015). A typical avian karyotype has a 2n of 76-80 (Ellegren 2013), and the chicken's haploid karyotype is defined by 39 chromosomes: chromosomes 1 through 10 are macrochromosomes, chromosomes 11 through 38 are microchromosomes, and the 39th chromosome is the sex chromosome (Hillier et al. 2004; Ellegren 2005). In comparison to other eukaryotic organisms, the reduction of avian genome size and transposable element density, the latter about 10% in avian genomes, began after the split of birds and crocodilians 250 million years ago (Griffin et al. 2007; Kapusta and Suh 2017). Thus, avian genomes are compact, presumably selected for during the evolution of flight (Hughes and Piontkivska 2005; Scanes 2015). In comparison with flightless birds, flying birds have smaller genomes; this might be due to the larger body sizes and longer generation times of flightless birds (Kapusta and Suh 2017). Birds expanded their repertoire of keratin genes, such as feather and claw keratins, and retained genes for egg production (Scanes 2015). The chicken genome has several genes encoding egg-related proteins that are not represented in the mammalian genome. These are examples of gene losses in the mammalian lineage.
On the other hand, there are some genes in chicken and humans that might have changed their function. In contrast to birds, which excrete uric acid, mammals excrete urea. Concomitantly, it seems that the function of some genes is altered in mammals, because genes encoding the enzymes of the mammalian urea cycle are also found in the chicken genome (Hillier et al. 2004). The alignment of the chicken and human genomes shows that at least 70,000,000 bp of sequence are likely to be functional in both species (Hillier et al. 2004). It is estimated that 20,000-23,000 protein-coding genes occur in the chicken genome (Hillier et al. 2004). In the human genome, some 20,000 genes have been detected so far. About 60% of protein-coding genes in chicken have a single human ortholog. Of these genes conserved between human and chicken, 72% are also conserved in the genome of the Japanese pufferfish Takifugu rubripes. Thus, these genes are most likely present in most vertebrates (Hillier et al. 2004).

3.2 How Does the Genome "Work"?

Replication of the DNA

DNA replication is the process of copying DNA within a cell of an organism. The process starts with the opening of the DNA double helix by the enzyme DNA helicase at a specific position, breaking the hydrogen bonds. Both strands of the DNA double helix serve as templates for the replication of a new complementary strand. After opening, the enzyme DNA polymerase adds complementary nucleotides one by one to the growing DNA chain. The unwinding and adding of new nucleotides stops when a region is reached that is either already replicated or where a protein binds to the DNA sequence to stop the replication. Afterwards, the new DNA strands are checked by proofreading to remove mismatches. The result of DNA replication comprises two DNA double helices, each with one old and one new DNA strand. Apart from a very small number of copying errors, the two daughter molecules are identical in sequence to the original DNA molecule.

Transcription: RNA Synthesis

When a cell needs a specific protein, the transcription of the respective gene (copying DNA into RNA) starts, followed by translation of the nucleotide sequence into an amino acid sequence. Transcription begins with the opening of a small portion of the DNA double helix and its unwinding to expose the bases. The enzyme RNA polymerase performs the transcription and finds its target position through a promoter, which is a specific nucleotide sequence of the DNA (Alberts et al. 2014). The promoter is located before the coding region and regulates the expression of the gene. One strand of the DNA double helix acts as a template for the synthesis of the mRNA (Fig. 3.2). The sequence of the mRNA chain is defined by complementary base-pairing between free nucleotides and the DNA template. This DNA template is exactly complementary to the precursor messenger RNA (pre-mRNA). Transcription stops at a terminator, which represents the end of a gene (Alberts et al. 2014); thus, the pre-mRNA is released. In eukaryotes, the pre-mRNA goes through several steps of processing, such as polyadenylation, capping, and splicing. Polyadenylation adds a poly(A) tail to the pre-mRNA, meaning that a specific number of adenine bases is appended. Capping places a specific nucleotide and associated proteins at the 5′ end to stabilize the mRNA. Splicing removes the introns (intragenic regions) from the pre-mRNA; therefore, only exonic sequences exist in the mature mRNA.
Translation

After these steps, translation begins in the cytoplasm on ribosomes, which are complexes of proteins and ribosomal RNA (rRNA). The RNA copies are used directly to synthesize the protein (Fig. 3.2) (Alberts et al. 2014). The information of the DNA (or mRNA sequence) comprises the genetic code, which is read by small RNA molecules, the transfer RNAs (tRNAs). A tRNA attaches at one end to a specific amino acid and displays at the other end a specific nucleotide triplet, the anticodon. This anticodon recognizes, due to base pairing, a codon in the mRNA. A stop codon is a nucleotide triplet which has no corresponding tRNA (Alberts et al. 2014); thus, reaching a stop codon in the mRNA terminates translation. Proteins are important for development and functioning: they form parts and build the structure of an organism; perform metabolic reactions, which are necessary for life; participate in regulation as transcription factors and receptors; are key players in signal transduction pathways; and can act as enzymes to catalyze chemical reactions.

Fig. 3.2 (caption): A gene in the DNA provides on the template strand the nucleotide sequence that is transcribed into RNA (with the base thymine replaced by uracil). This synthesis of RNA based on DNA is known as transcription. After transcription, the mRNA is released from the nucleus into the cytoplasm. In the cytoplasm, translation proceeds by synthesizing a peptide chain (protein) based on the nucleotide sequence of the mRNA. In this example, the peptide chain consists of methionine (Met), serine (Ser), cysteine (Cys), leucine (Leu), and a stop codon, which leads to the termination of translation.
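The logic of transcription and translation can be illustrated in a few lines of Python (our own toy example; the codon table is a five-entry excerpt of the standard genetic code, just enough to reproduce the Fig. 3.2 peptide):

```python
# excerpt of the standard genetic code, sufficient for the Fig. 3.2 example
CODON_TABLE = {
    "AUG": "Met", "AGU": "Ser", "UGU": "Cys", "CUG": "Leu", "UAA": "STOP",
}

def transcribe(template_strand: str) -> str:
    """Template DNA strand -> mRNA: complementary bases, with T replaced by U."""
    pairing = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pairing[base] for base in template_strand)

def translate(mrna: str) -> list:
    """Read the mRNA codon by codon until a stop codon is reached."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i+3]]
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

mrna = transcribe("TACTCAACAGACATT")   # hypothetical template strand
print(mrna, translate(mrna))           # AUGAGUUGUCUGUAA ['Met', 'Ser', 'Cys', 'Leu']
```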
Genetically controlled phenotypic variation is caused by genetic polymorphisms, though not all genetic polymorphisms, e.g., at selectively neutral loci, are relevant for phenotypic variation. However, different phenotypes may alternatively persist within a population due to variation under certain environmental conditions (Pigliucci 2001). If such environmentally induced phenotypes result in different morphs, they are referred to as polyphenisms. Polyphenisms are, thus, the result of phenotypic plasticity, which is defined as the ability of a single genotype to produce different phenotypes in different environments (Pigliucci 2001; West-Eberhard 2003). Such changes include modifications of developmental processes as well as of adult phenotypes in response to environmental stimuli. As phenotypic plasticity may quickly change phenotypic traits, it enables an organism to respond to changing environments (Merilä and Hendry 2014). Environmental variation may induce changes in behavior, morphology, or physiology, which may be transient or irreversible. More importantly, phenotypic plasticity may be adaptive or reflect nonadaptive interactions between an organism and its environment (Pigliucci 2001). If adaptive, plasticity alters the fitness of an organism under specific environmental conditions. Consequently, phenotypic plasticity may play an important role in adaptive evolution (Fusco and Minelli 2010). It may, for instance, shield genotypes from selection, thus slowing down evolutionary rates, or, alternatively, facilitate adaptive evolution through genetic assimilation of environmentally induced phenotypes (Ghalambor et al. 2007). Furthermore, note that effects of genes and the environment may be easily confused. Some environmental conditions may produce phenotypes similar to those produced by genetic factors and vice versa. Finally, both environmental conditions and genetic constitution interact with one another to generate the best adapted phenotype (Fusco and Minelli 2010).

3.3 How Does the Genome Evolve?

3.3.1 Modification of the DNA
DNA methylation is a chemical modification of chromatin. In the methylation process, small molecules (i.e., methyl groups consisting of one carbon atom and three hydrogen atoms) attach to the DNA. If a methyl group is attached to a part of a gene, the gene will be turned off. Modifying the wrong gene or other failures can result in abnormal gene activity or false inactivity of a gene. These errors in the epigenetic processes can lead to, for example, cancer and metabolic disorders. Epigenetics examines why the expression of a gene is activated at a specific time in the development of an organism. Furthermore, it describes an inheritable phenotype, which is created by changing chromosomes without alterations of their DNA sequences (Toraño et al. 2016). Not only DNA methylation but also acetylation and phosphorylation can result in similar changes. Acetylation is the introduction of an acetyl functional group into a chemical compound, while protein phosphorylation is a modification of a protein by which an amino acid is phosphorylated by the addition of a phosphate group. Both modifications are important in biological regulation such as gene and enzyme regulation. Another type of modification affects the histones by the attachment of chemical compounds. These chemical compounds can be used by other proteins to decide if a DNA sequence should be active or ignored within a specific cell.
Covalent histone modifications generate or stabilize the location of specific binding partners to chromatin, while non-covalent mechanisms provide the cell with further tools for introducing changes into the chromatin template. Chromatin remodeling and the inclusion of specialized histone variants are examples of non-covalent mechanisms. However, covalent and non-covalent mechanisms can also be combined (Goldberg et al. 2007).

Mutation
Biological diversity would not exist without some degree of error in the hereditary process. Such errors occur from the higher level of the karyotype down to the DNA base sequence. The rate at which changes occur in DNA sequences is defined as the mutation rate (Alberts et al. 2014). A mutation in which one pyrimidine base is replaced by the other, or in which one purine base is replaced by the other, is called a transition. A transversion is a mutation in which a purine base is replaced by a pyrimidine base or the other way around. A point mutation changes only a single base. A synonymous mutation appears if the substitution does not change the amino acid sequence of the polypeptide product. This is a type of silent mutation, which is more likely to be fixed by drift. A non-synonymous mutation in a coding region does change the sequence of the amino acids and therefore the polypeptide product. This could result in either the production of a different amino acid or a nonsense or termination codon. Selectively neutral mutations occur when changes in coding regions have no effects on the phenotype. Further modes of mutation include insertions and deletions of a single base or a sequence of bases. An insertion can be reverted by deletion of the inserted sequence, but a deletion of a sequence cannot be reverted in the absence of some mechanism to restore the lost sequence. There have been very precise repair mechanisms for billions of years, but some mutation rate remains. In most cells, this only affects the actual individual, but in germline cells (egg and sperm cells and their precursors), this leads to genotypic changes in the offspring and potentially their phenotype.

Selection
Selection is a process which acts on the phenotype and can benefit individuals with a certain feature or genotype. This leads to the spread of traits beneficial to survival and reproduction while eliminating detrimental ones. Individuals with advantageous traits have a higher chance to survive and produce more offspring than individuals with unfavorable traits. Negative or purifying selection eliminates new mutations, because the phenotype is negatively affected by the mutation. If an individual with an advantageous mutation survives, it can produce more fertile offspring than individuals without the mutation (positive selection). Sexual selection is the choice of mates of the other sex from the same species based on the preference for a presumably advantageous feature. This often leads to an arms race (e.g., in plumage coloration, song variability).

Genetic Drift
Genetic drift is a random change in the frequency of a heritable gene variant (allele) in a given population. It may occur in all populations, but its strength depends strongly on population size: The smaller the population, the larger the effect of genetic drift. Thus, genetic drift does not depend on specific alleles, either beneficial or harmful. Genetic drift may even lead to the fixation of a harmful allele or the disappearance of beneficial alleles and generally reduces the genetic variation within a population or species.
Genetic drift is often associated with founder effects (e.g., settlement on a small island) and population bottlenecks (e.g., glaciation reducing the inhabitable area), owing to the concomitantly reduced population size.

Geographic Variation and Dispersal
The physiological or morphological variation based on genetic features between populations of the same species across its whole range is called geographic variation. Geographic variation may often result from local adaptation, with specific genetic factors of a population being favored by natural selection. Dispersal means the range expansion of a population by individuals that are adapted to new habitats or places.

Recombination and Migration
The exchange of genetic material either between chromosomes or between different regions on the same chromosome is called recombination. Recombination creates new combinations of alleles and genes and gives rise to much of the genetic variability within populations due to different combinations in offspring compared with their parents. In sexually propagating organisms such as birds and humans, it occurs in every generation during the preparation of the germ cells (eggs and sperm). This forms the basis for adaptation to changing environmental conditions. Migration is defined as the change of gene frequency caused by a migrant introducing a new allele, or more copies of an existing allele, into a population.

Gene Duplication
A duplication of a DNA section which contains a gene is defined as gene duplication. Gene duplication can occur during the processes of DNA replication and recombination, or when an mRNA is converted back to DNA and the new genes integrate into the genome. Gene duplication may allow for the development of a new function. Gene duplication may also affect a phenotype, e.g., copies of a gene can lead to a surplus of the gene-specific protein, because the amount of a synthesized protein is regularly proportional to the present number of gene copies (Clancy and Shaw 2008). Another type of duplication is the duplication of whole chromosomes. This process can occur during cell division when the chromosomes do not separate correctly between the two cells.

3.4 How to Study Speciation Using Genomic Features?
The first molecular markers for species delimitation and taxonomy were isozymes and allozymes. Isozymes describe different molecular forms of an enzyme which are encoded by different loci. In contrast, allozymes characterize different molecular forms of an enzyme produced by different alleles at the same locus (Duminil and Di Michele 2009). The term locus refers to a specific position of a gene, while the term gene is related to a DNA section which contains the information to produce an RNA molecule. The principal approach when using allozymes or isozymes is to identify the variation of an enzyme among individuals using electrophoresis. However, nowadays almost exclusively DNA markers instead of protein markers are used for speciation studies, because of the low resolution of protein markers due to synonymous mutations. AFLP studies use restriction enzymes, which digest genomic DNA, followed by the ligation of adapters to the sticky ends of the restriction fragments. A selection of the restriction fragments is amplified with polymerase chain reaction (PCR) primers, which have a corresponding adaptor and restriction-site-specific sequences. Afterward, the amplicons are separated through electrophoresis on a gel and visualized.
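As a purely illustrative aside (this is not part of the laboratory protocol, and the sequence is invented), the digestion step shared by AFLP and the techniques below can be mimicked in silico. The sketch cuts a sequence at the recognition site of EcoRI, a real enzyme that cuts GAATTC between the G and the AATTC, and reports the fragment lengths that a gel would separate.

```python
# In silico restriction digest (illustration only; the DNA string is made up).
import re

def digest(sequence: str, site: str = "GAATTC", cut_offset: int = 1) -> list:
    """Cut `sequence` at every occurrence of `site`, `cut_offset` bases in."""
    fragments, start = [], 0
    for match in re.finditer(site, sequence):
        fragments.append(sequence[start:match.start() + cut_offset])
        start = match.start() + cut_offset
    fragments.append(sequence[start:])
    return fragments

dna = "ATTGAATTCGGGTACGAATTCTT"
print(digest(dna))                    # ['ATTG', 'AATTCGGGTACG', 'AATTCTT']
print([len(f) for f in digest(dna)])  # fragment lengths, as compared on a gel
```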
RFLP is a technique which starts with the cutting of DNA fragments by restriction enzymes, followed by a gel electrophoresis to order the DNA fragments by their length. RAPD is a special variant of the PCR, because it uses short primers and the results are random DNA sequences. The gel electrophoresis shows individual patterns.

PCR-Based Molecular Markers
There are codominant molecular markers which can be used for species delimitation and taxonomy. For most of these markers, the PCR method is used to multiply a specific DNA sequence of a sample. The method starts with the denaturation of the double-stranded DNA into single strands, called templates. Short DNA sequences, which are generally 18-20 bp long and are known as primers, bind to the templates. This step is called annealing. The next step is elongation, in which the enzyme DNA polymerase synthesizes a new DNA strand, which is complementary to the template, by adding free nucleotides to the single DNA strand. Afterward, the annealing and elongation are repeated for a definite number of cycles, until enough target DNA sequences are available (Semagn et al. 2006). This method can be used to sequence and analyze different DNA sequences for a variety of scientific questions.

Ribosomal Genes
The nuclear rDNA encodes rRNA, and both contain highly conserved and variable domains, which makes them well suited for analyzing phylogenetic relationships (Hwang and Kim 1999; Patwardhan et al. 2014). The nuclear small subunit (SSU) rDNA is a highly conserved region of the DNA, which has been used for the reconstruction of phylogenetic relationships among kingdoms, phyla, classes, and orders. The nuclear large subunit (LSU) rDNA contains more variation than the SSU rDNA, and the size of its genes varies among phyla. The LSU rDNA is used for studying genetic relationships within orders and families (Hwang and Kim 1999). Further highly conserved regions like the nuclear SSU rDNA are the 12S and 16S rDNA. They encode the ribosomal RNA of the small and large subunits, respectively, of the ribosome in a mitochondrion. The 12S rDNA has been used to study the phylogeny of phyla and subphyla, while the 16S rDNA has been used for analyzing the phylogenetic relationships within families and genera, because the 16S rDNA is more variable than the 12S rDNA (Hwang and Kim 1999).

Mitochondrial DNA Markers
Because the mitochondrial DNA evolves faster than the nuclear genome, mitochondrial protein-coding regions have been used for analyzing the phylogenetic relationships within families, genera, and species (Hwang and Kim 1999). The first mitochondrial marker used was the control region, which is located in the noncoding region and is part of the regulation and initiation of the mitochondrial DNA replication and transcription (Patwardhan et al. 2014). The mitochondrial control region is variable in size and contains many variations also between individuals of the same species. Thus, it is used for studying genetic relationships among species, subspecies, and populations (Hwang and Kim 1999). The second mitochondrial marker was the cytochrome oxidase I/II (COI/II), which is a well-known protein of an electron transport chain. In the cytochrome c oxidase complex, the COI and COII genes code for two polypeptide subunits. Both have been used for phylogenetic relationships among orders, families, subfamilies, genera, and species. The sequence of the COI gene is one of the sequences that can be used as a barcode for the identification of species (Patwardhan et al.
2014). DNA barcoding is a method to identify species by using short sequences. Further widely used mitochondrial markers to reconstruct the phylogeny among genera and species are the cytochrome b (cytb) and NADH dehydrogenase 2 (nd2) genes.

Microsatellites
A microsatellite is a specific DNA motif with a length of two to six base pairs (Fig. 3.3). Microsatellites are used to detect the number of repeats of a sequence to identify an individual. Minisatellites are similar to microsatellites, but their repeat motifs are longer. Microsatellites can be amplified by PCR, for which labeled primers are needed, followed by analyzing the length of the fragment (microsatellite). A large advantage is the small amount of DNA needed for the PCR. Microsatellites are locus-specific, codominant, and highly polymorphic. A disadvantage of microsatellites is their taxon-specificity. Thus, microsatellite libraries need to be generated for each species or closely related sister species (Delaney 2014). Microsatellites are currently mainly used for paternity tests and population genetics, but hold large potential for speciation studies due to their potential to distinguish lineages within a species. It is necessary to work with more than one microsatellite locus to obtain reliable results.

Expressed Sequence Tags
Genes must be converted into mRNA, but RNA is unstable outside the cell. Hence, mRNA needs to be converted into complementary DNA (cDNA) by the reverse transcriptase enzyme. The production of cDNA is the reverse process of transcription, because mRNA is used as the template instead of the DNA. cDNA is more stable than mRNA and generally contains only exons due to splicing of the pre-mRNA. This means that cDNA represents an expressed gene or a part of it. When the cDNA has been isolated, various nucleotides can be sequenced to create expressed sequence tags (ESTs) with a length of 100-800 bp. They allow the discovery of unknown genes and a comparison between different species due to high conservation in the coding regions (Semagn et al. 2006). From ESTs it is possible to develop primer pairs for sequencing genes in other species and to detect single nucleotide polymorphisms (SNPs) (Schlötterer 2004; Semagn et al. 2006).

Single Nucleotide Polymorphisms
A single nucleotide polymorphism (SNP) is the change of a single base in the DNA sequence (Fig. 3.3) (Semagn et al. 2006). Generally, two different nucleotides can be found per position, and SNPs mostly occur in noncoding regions (Grover and Sharma 2016). The simplest method to identify SNPs is to screen a high-quality DNA sequence or an EST. The most common methods, restriction-site-associated DNA sequencing (RAD-seq) and genotyping by sequencing (GBS), will be explained in the following two sections. A comprehensive strategy for detecting SNPs in a genome is the generation of shotgun genome sequences. For this method, a pool of DNA from different individuals should be sequenced. A more efficient approach is shotgun sequencing with a reduced section of the genome, in which the DNA of many different individuals can be sequenced (Schlötterer 2004). Most of these methods are cost- and time-intensive, and the information content of one SNP is very low, but SNPs have a low mutation rate (high stability) and a high frequency in the genome, and new analytical methods are being developed and open up new opportunities. SNPs can be used for different research questions, e.g., investigating natural selection across species (Künstner et al.
2010).

Restriction-site-associated DNA sequencing
Restriction-site-associated DNA sequencing (RAD-seq) is the genotyping of short DNA fragments which are adjacent to the cut site of a restriction enzyme (RE). The first step of RAD-seq is the digestion of the genomic DNA with a chosen RE, followed by the ligation of an adapter (P1) to the overhang of the RE (Baird et al. 2008; Davey and Blaxter 2011). This adapter contains a binding site for the forward primer and a barcode for the sample identification. After ligation, the fragments are pooled and size selected (Baird et al. 2008). The DNA fragments are then ligated to a second adapter (P2), which has a reverse primer site and is a Y adapter with divergent ends (Coyne et al. 2004; Baird et al. 2008). The reason for choosing a Y adapter is that all fragments contain the P1 adapter, because the P2 adapter cannot bind to the reverse primer before the amplification of the P1 adapter has been finished (Baird et al. 2008; Davey and Blaxter 2011). After ligation of the second adapter, a PCR reaction is performed. The PCR products are used for next-generation sequencing (3.4.7) (Baird et al. 2008). The resulting reads are trimmed, grouped by barcodes, and mapped to a reference genome or, if no reference genome is available, the same reads are aligned for identifying SNPs (Baird et al. 2008; Davey and Blaxter 2011). The challenges of RAD-seq are the high costs of sequencing and the diversity of RAD-seq protocols with different technical details. Nevertheless, one can choose the protocol most suitable for one's own study system or research question (Andrews et al. 2016). RAD-seq can identify and generate thousands of genetic markers, reduces the complexity of the genome, and can be used for species with no or limited existing sequence data (Davey and Blaxter 2011). Furthermore, RAD-seq was extended to use two REs instead of one RE to exclude the step of size selection. This method is called double digest RAD-seq (Peterson et al. 2012).

Fig. 3.3 Ten individuals from one population are represented with fictional sequences. In these sequences, two single nucleotide polymorphisms (SNPs) and one microsatellite occur. The first SNP is a variation of the bases cytosine (C) and thymine (T). The individuals 1, 2, 4, 8, and 9 carry base C, while the other individuals have a T at the same position. In the individuals 1, 3, 6, 9, and 10, an adenine (A) appears as the second base, whereas the individuals 2, 4, 5, 7, and 8 have the base T at this position. The microsatellite in this example is a repetition of two bases, C and A. In the individuals 1, 2, 5, 8, 9, and 10, it is 12 bases long, (CA)6. In the individuals 3 and 4, the microsatellite is 14 bp long, (CA)7, while it is shorter in the individuals 6 and 7, (CA)5.

Genotyping by sequencing
Genotyping by sequencing (GBS) is a highly multiplexed approach for constructing reduced representative libraries for the Illumina next-generation sequencing platform to discover a large number of SNPs. This approach can be used for any species at a low per-sample cost and also incorporates restriction enzymes (REs) to reduce genome complexity (Elshire et al. 2011; Chung et al. 2017). The procedure of GBS, like RAD-seq, starts with the digestion of DNA by an RE. The selected REs should be suitable for the investigated species by producing an overhang of two to three base pairs, and they should not cut frequently in the major repetitive fraction of the genome. After the digestion, two adapters are ligated to the ends of the digested DNA.
The adapters should be complementary to the overhang of the chosen RE, and one adapter contains a barcode for multiplex sequencing. These adapters contain binding sites for appropriate primers, which are added to perform a PCR reaction to increase the amount of DNA fragments. The PCR products are cleaned up, and DNA fragments of a specific size result in a library. Libraries are used for sequencing, followed by filtering for reads which match one of the barcodes and the corresponding cut site of the RE and are not adapter dimers. These sorted reads are separated by their barcode, and after separation the barcode is removed. The filtered reads are mapped to the reference genome; consequently, reads which map to the same position are aligned and used to identify SNPs (Elshire et al. 2011). GBS is a cost-effective method to discover SNPs, genotype individuals within a population, and detect molecular markers. The disadvantages are the management of big datasets and the fact that the data do not represent the whole genome, which could have a negative effect on constructing genetic maps (Chung et al. 2017).

Transcriptomics
This is a technique to study an organism's transcriptome, which is the total of all its RNA transcripts. The transcriptome is a snapshot at a specific time of all transcripts in one cell or tissue, for a specific developmental stage. The genes expressed by one organism in different cells, tissues, conditions, or time points give details about the function of uncharacterized genes and the biology of organisms. Furthermore, the comparison of transcriptomes allows the identification of genes which are expressed in different cells; hence, it gives information about gene regulation. There are two techniques to create a transcriptome: microarrays and RNA-Seq. The microarray approach quantifies a set of predefined sequences, while the RNA-Seq technique uses next-generation sequencing to target "all" expressed genes (Wang et al. 2009).

"Whole" Genome Sequencing
Next-generation sequencing (NGS) is a method to produce a large number of reads of short DNA sequences, between 50 and 150 bp long. The read length of NGS is often short with a high error rate, but this is compensated by a higher coverage of the consensus sequence (Scanes 2015). These reads can be combined into continuous sequences (contigs), and contigs can in turn be linked into scaffolds. Indications about the quality of contigs and scaffolds (genome assemblies) can be provided by the N50 value, which represents the minimum length of the long sequences that make up half of the assembly of contigs or scaffolds (Kapusta and Suh 2017). Contigs and scaffolds can be used to identify genes, but there are sequences which carry no genetic information and which are clustered in chromosome Unknown (chrUn). Annotation is the process of linking DNA reads to information available from previous work (on other taxa) (Scanes 2015).

Different Strategies for Sequencing Genomes
The traditional Sanger sequencing with 1-kb-long sequence reads and the Roche 454 sequencing with up to 800 bp sequence reads have been largely replaced by short-read technologies such as Illumina HiSeq with 150 bp sequence reads. There are also even newer technologies available, such as Pacific Biosciences with up to 5 kb sequence reads or Ion Torrent with about 500 bp sequence reads (Ekblom and Wolf 2014). The technology of 10× Genomics uses short reads from Illumina sequencing to link the short reads to long molecules.
In the long molecules, variation can be detected to identify which reads belong to the father or mother of the examined individual. Another method detects single molecules and sequences their DNA. This is called single-molecule genomics. One of the most common strategies for genome sequencing is shotgun sequencing. First, DNA is cut into small random fragments, whereby the size of the fragments depends on the technology used. These fragments are then assembled into a longer contig. This process is known as de novo assembly. It is important that there is enough overlap between the sequence reads for a correct assembly, and this also implies a high coverage. If there are longer fragments of several hundred base pairs, both ends of the fragment are sequenced, which is called paired-end sequencing. Afterward, the resulting contigs are connected into longer sequences (scaffolds) (Ekblom and Wolf 2014). The genome annotation uses the whole genome sequences in combination with relevant information from gene models, functional information, microRNA, or epigenetic modifications. Consequently, a lack of genomic information will result in low annotation rates. Annotation describes the process of using data of other genomes or transcriptomes to detect genes or transcripts on the newly assembled genome (Ekblom and Wolf 2014).

Limitations of Analyzing Genomes
Usually, a genome draft represents the complete nucleotide base sequence for all chromosomes in one species. Nevertheless, there is not just one sequence for a species, due to individual genomic variation and differences among cells within individuals caused by diploidy. Thus, the assembled reference genome sequence of one individual will only comprise a subset of the total variation present within a species. Typically, one individual is sequenced, but sometimes a genome is based on a consensus of a few individuals (Ekblom and Wolf 2014). Furthermore, it is not possible to sequence and assemble all nucleotides in the genome due to sequencing errors (Scanes 2015), and most genome assembly methods fail on repetitive elements, which are typically not included in reference genomes (Hoban et al. 2016). However, repetitive regions may be characterized through the annotation of a comprehensive dataset composed of a high-coverage single-molecule real-time sequencing assembly, an assembled optical map, and a high-coverage short-read sequence assembly, compiled into a repeat library (Weissensteiner et al. 2017).

Epigenome
In almost all cells of an individual, the same DNA sequence can be found, but cells may nevertheless differ, as the information content encoded within the DNA may be used differently. Such differences may arise from chemical modifications of the DNA or histone proteins without changing the DNA sequence. The resulting epigenome includes chemical compounds which have been added to the DNA to regulate gene activity. These chemical compounds are not part of, but attached to, the DNA. Epigenomic changes occur in individual development and tissue differentiation and may persist through cell division; in some circumstances, they can even be transferred to the next generation. However, the epigenome can also be influenced by environmental conditions, such that the epigenome may vary between individuals. Through epigenetic changes, genes can be turned off or on (expression), thus determining the production of proteins in specific cells. For example, the eye is specialized for light-sensitive proteins and red blood cells for carrying oxygen.
Furthermore, epigenetic changes in DNA and histones play a role in regulatory pathways of eukaryotes (Marshall Graves 2015).

Closing Words
Speciation is one of the main focuses in evolutionary biology and also the starting point to clarify the relationships between species. Morphological traits and reproduction are important for characterizing a species, but over the last decades, genetic tools have gained more and more influence in the delimitation of species. Therefore, it is necessary to understand the structure and function of the genetic material used. The genetic investigation of speciation began with short sequences and few genes for small sample sizes. Nowadays, more individuals of one species and additionally more species can be examined. Furthermore, SNPs, transcriptomes, and whole genomes are the newest tools to analyze and understand speciation, also in a functional respect. However, methods will be further developed to become more cost-effective, faster, and more informative.
Comparison among Classical, Probabilistic and Quantum Algorithms for Hamiltonian Cycle problem
The Hamiltonian cycle problem (HCP), which is an NP-complete problem, consists of having a graph G with n nodes and m edges and finding the path that connects each node exactly once. In this paper we compare some algorithms to solve the Hamiltonian cycle problem, using different models of computation, especially the probabilistic and quantum ones. Starting from the classical probabilistic approach of random walks, we take a step in the quantum direction by involving an ad hoc designed Quantum Turing Machine (QTM), which can be a useful conceptual design tool for quantum algorithms. Introducing several constraints on the graphs, our analysis leads to non-exponential speedup improvements over the best-known algorithms. In particular, the results are based on bounded degree graphs (graphs whose nodes have a maximum number of edges) and graphs with the right limited number of nodes and edges to allow them to outperform the other algorithms.

Introduction
A Hamiltonian cycle is a cycle in an undirected or directed graph that visits each vertex exactly once. The Hamiltonian Cycle problem (HCP) is the problem of determining whether a Hamiltonian cycle exists in a given graph. HCP is in NP, and more precisely it is NP-complete. In short, the first statement indicates that, while it is almost impossible to find an efficient solution for HCP, it is straightforward to check in polynomial, often linear, time whether a given item is a solution or not. Obviously, generating and checking all possible solutions to find a correct one has exponential time complexity, and it is very difficult to obtain a solution with a small error margin in another way, for example with heuristics.

NP-complete problems are a class of decision problems within NP to which any other problem in NP can be reduced in polynomial time. In other words, if you could solve any NP-complete problem in polynomial time, you could solve all problems in NP in polynomial time. Therefore, a polynomial, more precisely exact, solution given to HCP in its general formulation would imply that P = NP, resolving one of the most discussed problems in Computer Science. Here we do not have a similar aim; we want only to compare some algorithms that find the exact solution within the different computational models, namely the deterministic, probabilistic and quantum ones.

A simple deterministic solution to the HCP, for example, is the following: given a graph G with n vertices, generate all permutations of its vertices which begin with 1, a phase with complexity O(n!), where "!" denotes the factorial function; then check each permutation, meaning that we have to test whether each vertex in the permutation is connected to the next by an edge, and whether the last of the sequence is connected by an edge to the vertex numbered 1; for each test we can assume complexity O(n). So, the whole complexity is O(n · n!), which can be estimated through the known Stirling formula [1].
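As a concrete illustration, the following Python sketch renders the deterministic solution just described; it is our own rendering (vertices are 0-indexed, so the fixed starting vertex is 0, and the adjacency matrix is an invented example).

```python
# Brute-force HCP: try every permutation of the vertices that starts at a
# fixed vertex and check all consecutive edges plus the closing edge.
from itertools import permutations

def has_hamiltonian_cycle(adj) -> bool:
    n = len(adj)
    for perm in permutations(range(1, n)):
        path = (0,) + perm
        if all(adj[path[i]][path[i + 1]] for i in range(n - 1)) and adj[path[-1]][0]:
            return True
    return False

# Invented example: the 4-cycle 0-1-2-3-0.
adj = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 1, 0]]
print(has_hamiltonian_cycle(adj))  # True
```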
In the simplest scenarios, which involve small graphs, this class of solutions is broadly used [2][3]. With larger graphs, though, a deterministic approach is computationally infeasible. Therefore, probabilistic and quantum algorithms come into play [4]; they do not guarantee finding a Hamiltonian cycle if one exists, but they can be effective in practice. In fact, the trade-off is between computational time and the quality of the solution; these methods can provide reasonably good solutions within a reasonable time frame for many real-world instances of the HCP. The algorithms also differ by the class of graphs they are applicable to, because some of them are designed to give better solutions on HCP graph models with several constraints (such as the degree of nodes) than the ones that work on every graph. We will see some of these approaches in the Related Works section.

Overall, in this article we compare the most relevant algorithms and results, among themselves and with others designed by ourselves, using HCP as a benchmark and a hint to compare different approaches and models of computation.

Related Works
We know [5] that for sparse locally connected graphs, HCP is solvable in polynomial time. Sparse locally connected graphs are graphs whose number of edges m is suitably bounded in terms of the number of nodes (vertices) n and the degree d of each vertex, meaning the number of edges entering the node. Furthermore, for graphs constructed so that each node has a fixed degree, subject to suitable integer conditions on the number of adjacent nodes, HCP is in P, as shown in [6].

In [7], they design a distributed algorithm that with high probability computes a Hamiltonian cycle in a random graph G(n, p), where n is the number of vertices and each possible edge occurs independently with probability p. They compute HCP with high probability for this graph class by using nodes as cooperative computing elements, reaching the time complexity bound reported there. Obviously, they do not obtain this result for a generic graph G.

Another way to use probability in HCP solutions is to consider random walks on graphs. This type of research involves constructing a geometric polytope based on the graph to be examined and using its extreme points, meaning points that do not lie in any open line segment joining two points, as starting points of the analysis. Very positive experimental results are shown in [8] using this type of approach. In [9], they show a quantum computing model in which Hadamard gates are used to obtain all permutations of vertices in superposition. Then the Grover algorithm is applied to search for a Hamiltonian cycle, if one exists. So, in [9] the authors solve HCP for a general graph with high probability, but only a quadratic speedup is obtained compared to the deterministic model of searching classically among all possible solutions. The time complexity of this solution remains exponential.

As they show in [10], for an adiabatic quantum computing model it is possible to use a particular methodology to obtain code effectively testable on an existing prototype, the well-known D-Wave. They map the Hamiltonian cycle problem onto a QUBO problem, the mathematical formulation for which D-Wave is designed. However, they do not observe an exponential speedup over the number of nodes of the graph. They also underline problems with noise when preparing and testing their examples.
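The QUBO mapping of [10] is not spelled out above; the sketch below therefore shows a standard textbook-style encoding of HCP as a QUBO energy (an assumption on our part, not necessarily the exact mapping used in [10]). Binary variables x[v][t] mean "vertex v occupies position t of the cycle", and an assignment has energy 0 exactly when it encodes a Hamiltonian cycle.

```python
# Hedged sketch of an HCP-as-QUBO energy (names and penalty value are ours).
import itertools

def hcp_qubo_energy(x, adj, penalty=2.0):
    n = len(adj)
    e = 0.0
    for v in range(n):                   # each vertex in exactly one position
        e += penalty * (sum(x[v][t] for t in range(n)) - 1) ** 2
    for t in range(n):                   # each position holds exactly one vertex
        e += penalty * (sum(x[v][t] for v in range(n)) - 1) ** 2
    for t in range(n):                   # consecutive positions need an edge
        for u, v in itertools.product(range(n), repeat=2):
            if u != v and not adj[u][v]:
                e += penalty * x[u][t] * x[v][(t + 1) % n]
    return e                             # 0.0 iff x encodes a Hamiltonian cycle

adj = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
x = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # cycle 0-1-2-3-0
print(hcp_qubo_energy(x, adj))  # 0.0
```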
Another quantum information technique for graph algorithms is quantum random walks. For example, in [11] quantum random walks are described for very particular graphs, such as hypercubes and a few tree-based graphs. However, using quantum walks for a generic graph was proved impossible [12].

Based on all these results, we reached the conclusion that very significant gains in computational time are obtained by restricting the graph classes of the problem instances.

The brute force algorithm presented in the introduction has a time complexity of O(n · n!). For a generic graph, only some algorithms can outperform it. To introduce them, we must also consider that HCP can be reduced to the Traveling Salesman Problem (TSP). The latter is the problem of finding the minimum cost Hamiltonian cycle in a weighted graph. This is computed representing the graph as a cost matrix C, in which the element c_ij is the cost of the edge from node i to node j. Then, it is possible to solve an HCP with an algorithm for TSP, putting in the element c_ij the cost 1 if the edge from i to j exists, and 2 or more otherwise, and then verifying whether the total cost of a TSP solution is equal to the number of nodes of the given graph or not. Finally, we can apply the Bellman-Held-Karp algorithm [13], which has complexity O(n^2 · 2^n), or the algorithm designed in [14]. This latter is a quantum algorithm with a better, though still exponential, complexity.

In addition, many heuristics and strategic approaches exist for TSP, such as the Ant Colony Optimization Algorithm, Particle Swarm Optimization Algorithm, Artificial Bee Colony Algorithm and, recently, Genetic Algorithms [15][16], but they do not find the exact solution efficiently; they only search for a solution near the optimal one. Alternatively, also for TSP, as for HCP, quantum annealing algorithms have been proposed [17][18][19], but they still have too many limitations regarding noise and the number of nodes of the graph that can be elaborated [17].

In the following sections we will present our proposed probabilistic and quantum algorithms.

A probabilistic approach for HCP
Using a random walk approach on a graph G, we consider a memory-less path in which, if at step i we are at a node v_i, the probability to move to a given neighbor is 1/d_i, where d_i is the degree of the vertex v_i. Obviously, the sequence of random nodes is a Markov chain. Now we add some conditions to the random walk for each single trial, as shown in the following sample of code, sketched in Python from the legend of the original listing and the stop conditions used in Theorem 1. Let n be the number of the nodes of G. Then we show:

```python
from random import randrange

# Legend (from the original listing):
#   a.edge[b] == 1 iff there is an edge between nodes a and b, 0 otherwise
#   a.visited == 1 iff node a has already been visited (marked)
#   a.numUnmarked: number of unmarked (unvisited) neighbors of node a
#   a.degree: number of neighbors of a; a.neighbors: array of its neighbors
#   chooseRandomNode(a): returns the next node according to our approach
#   (getRandom() of the legend is rendered here with randrange)
def chooseRandomNode(a):
    unmarked = [b for b in a.neighbors if b.visited == 0]
    return unmarked[randrange(len(unmarked))] if unmarked else None

def trial(start, n):                     # one random walk of at most n - 1 steps
    v, path = start, [start]
    v.visited = 1
    for _ in range(n - 1):
        v = chooseRandomNode(v)
        if v is None:
            return None                  # dead end: the trial fails
        v.visited = 1                    # marking rule
        path.append(v)
    # Exit condition: a full path plus an edge back to the start is a cycle
    return path if v.edge[path[0]] == 1 else None
```

Theorem 1. Consider a random walk on a graph G starting at node v_1 that follows the stop conditions stated in the above algorithm. Let d_i be the degree of v_i, that is, the number of vertices of G adjacent to v_i. Let k_i be the number of vertices of G adjacent to v_i but already visited. Their difference d_i − k_i is equal to the number of vertices adjacent to v_i not visited yet (numUnmarked of v_i).
Then, if the Exit condition of the listing is true, p = (v_1, v_2, ..., v_n) is a Hamiltonian cycle for G and its frequency, as the number of trials tends towards infinity, is:

F(p) = ∏_{i=1}^{n-1} 1/(d_i − k_i).

Proof. The first part is simple to prove. If the Exit condition in the algorithm above is true, we have arrived at step n traversing n nodes all different from each other. Furthermore, if there is an edge between v_n and v_1, it means that there is an edge between the vertex visited at step n and the one visited at step 1, numbered 1 as given. So p is a Hamiltonian cycle.

We now prove the second statement of our theorem. It simply follows from the marking rule of the algorithm above. The probability is calculated as a product of probabilities of independent events. Our algorithm's iteration steps depend on one another, but this dependence is already implicit in the term d_i − k_i inside the probability designed in it, because at each step we are marking nodes, virtually increasing the term k_i of some nodes. So, the probability that the second step is v_2 is 1/(d_1 − k_1), and the probability that at step i+1 the visited node is v_{i+1} is 1/(d_i − k_i). Note that k_1 = 0 because we have not visited any node yet, so we write 1/d_1 instead of 1/(d_1 − k_1). The probability that the random walk visits the sequence (v_1, ..., v_n) is thus already given by the final formula of the theorem, and if there is also an edge between the last visited node and the first, it is certain that the same holds for the Hamiltonian cycle p.

Corollary 1. Considering the rate of success of our algorithm for each Hamiltonian cycle in the graph, and given that each random walk has O(n) time complexity, we can deduce that an upper bound on its expected time complexity is:

O(n^2 · ∏_{i=1}^{n-1} (d_i − k_i)) ≤ O(n^2 · d^{n-1}),

where, for the sake of simplicity, we chose d = max_i d_i. Furthermore, if L is a lower bound on the number of Hamiltonian cycles of G, in the case that G is Hamiltonian, then its expected time complexity is:

O(n^2 · d^{n-1} / L).

Proof. Given F(p), considering that the complexity is the inverse of the success probability, and by considering the random walk from every node (hence the factor n), the first formula of the above corollary follows. The second formula instead follows from the first and the corrected hit rate, based on a supposed lower bound on the number of Hamiltonian cycles in the graph, which allows us to increase the probability of finding one, therefore improving the time complexity.

We have thus designed a probabilistic algorithm that, with high probability, solves HCP for a generic graph with a better time complexity than the classical brute-force deterministic algorithm presented in the introduction, assuming simply that the above bound grows more slowly than O(n · n!). However, we remain within an exponential time complexity.

Comparing our algorithm to the quantum computational search algorithm shown in [9], we can also do better, but only with the stronger assumption that the bound falls below the quadratically accelerated behavior of [9], of the order of the square root of n!.

We are not discussing here the advantage of decreasing the degree from d_i to d_i − k_i during the random walk. Consequently, our study of time complexity is valid also for perfectly Markov random walks, not based on previous choices. Furthermore, to outperform respectively the best-known classical [13] and quantum [14] algorithms for HCP, the corresponding conditions (6) and (7) must be true: the bound must fall below O(n^2 · 2^n) and below the quantum bound of [14], respectively.

The quantum version of our probabilistic algorithm
In this chapter, we show a few quantum versions of our algorithm in a QTM format, as we already did for some known algorithms in [20]. We give the following definition of a Quantum Turing Machine [20][21].
A Quantum Turing Machine (QTM for short) can be seen as the quantum version of a Turing Machine, usually described by the 7-tuple (Q, Σ, Γ, δ, q_0, F, #) with a condition of unitary evolution, where:
- Q is the (finite) set of internal states {q_i}; q is usually referred to as the current state;
- Σ is the input alphabet; Γ is the finite set of symbols called the tape alphabet, the tape being a sequence of cells, each containing one symbol (i.e., Σ ⊆ Γ), and it usually contains at least 1, used to code natural numbers in unary notation, and #, the blank symbol;
- δ : Q × Γ × Q × Γ × {L, N, R} → C is the transition function that allows moving from one state to another, giving the amplitude of each step. The squared modulus of this function represents the probability of having that step if a measurement occurs. Furthermore, we have the condition that the induced evolution is unitary and that, for each configuration, the squared moduli of the amplitudes of its possible successors sum to 1. L, N, R are the allowed moves of the tape head, respectively Left, None (no head move) and Right;
- q_0 (a distinguished member of Q) is the initial state;
- F (a subset of Q) is the set of final states (one final state is sufficient).

Here we itemize other relevant features:
- A tape is a pair of strings w and w', the contents to the left and to the right of the head, with the second stored in reversed order (w'^R);
- σ is the head symbol of the tape whenever σ is the rightmost symbol of w;
- A configuration of M is a triple (w, q, w');
- The initial configuration is represented as (ε, q_0, x), where x is the input;
- A final configuration is (w, q_f, w'), where q_f ∈ F and w is the output; we assume that if a final configuration is a superposition of more than one, then all of them are in a final state.

A QTM-computation [15] is a (finite) set of configurations as above which determines a mapping a such that, for each pair of configurations (c, c'), a(c, c') represents the amplitude of the transition of M from c to c'. This matrix must be unitary.

Consequently, for each configuration c and all its next configurations c_j, if a_j is the amplitude representing the transition from c to c_j, then Σ_j |a_j|^2 = 1, where |a_j|^2 represents the probability of going from c to c_j; all the configurations occur in a parallel way, step by step, until a measurement is effectuated.

Note that after the first step, the starting configuration can also be a superposition of configurations. In this case, the next configuration is determined by the transition function as well, but weighting each component of the superposition with the relative amplitude.

We now want to give a quantum version of our algorithm. We start with the definition of the following QTM:

M = (Q, Σ, Γ, δ, q_0, F, #); (12)

Each symbol refers to the previous generic definition of a QTM; H and NH represent respectively the symbols for a found Hamiltonian cycle and a not found one, # is the blank symbol, and n is the number of the nodes of a given graph G.

Note that with this definition each cell of the tape may contain not only a single symbol and superpositions of symbols, but also partial or full paths of the graph and their superpositions.

Then, we define δ through rules built around an oracle function of the given graph G, supposed to be implemented in a way that lets us go easily from a node to its neighbors. Let w_t be the string contained at step t on the input tape at the left of the cursor. The rules extend, at each step and in superposition, every partial path in w_t by one not-yet-visited neighbor. And finally, in the last rule, the machine writes H only if the path is a Hamiltonian cycle; otherwise it writes NH. Now a measurement on the input tape, precisely in the cell relative to the cursor position, that is, the last written cell from the origin position, is done.
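To make the behavior of this machine concrete, the following Python sketch (our classical illustration, with invented names, not the QTM itself) enumerates the superposition of paths the rules build, giving each branch the amplitude 1/√(number of unvisited neighbors); the summed squared amplitudes of the paths that close a cycle give the hit probability that Theorem 2 below establishes.

```python
# Classical enumeration of the machine's path superposition (illustration only).
from math import sqrt

def hit_probability(adj) -> float:
    n = len(adj)
    prob = 0.0
    stack = [([0], 1.0)]                 # (partial path, branch amplitude)
    while stack:
        path, amp = stack.pop()
        if len(path) == n:               # full path: a hit iff a closing edge exists
            if adj[path[-1]][0]:
                prob += amp ** 2
            continue
        unvisited = [v for v in range(n) if adj[path[-1]][v] and v not in path]
        for v in unvisited:              # branch "in superposition"
            stack.append((path + [v], amp / sqrt(len(unvisited))))
    return prob

adj = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(hit_probability(adj))  # 1.0 for the 4-cycle: both directed tours are found
```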
The oracle function has access to the graph representation in an efficient way. Initially, it starts from node 1 of G. It returns a superposition of paths of length 2, each ending with a neighbor of node 1. The cursor always moves right. The oracle function considers all the information written on the tape, also because it is summarized at the rightmost position of the written part of the tape, in the last cell exactly. If possible without revisiting nodes, it extends each path of the superposition by one node in the next cell. When this is not possible, it writes the symbol NH on the tape. This symbol is then propagated for all the remaining steps of the path. At the last step, which is always the n-th step, it determines whether there is a Hamiltonian cycle, that is, a length-n path and an edge to node 1 closing the cycle, and for that path it writes the symbol H. Otherwise it writes NH.

The probability of finding H as a result, given the previously defined δ, is greater than zero only if the graph is Hamiltonian. In this case, a read operation on the whole tape would yield a Hamiltonian cycle.

Theorem 2. The probability of a hit result in the final measure for the above quantum algorithm is:

P(H) = Σ_p ∏_{i=1}^{n-1} 1/(d_i − k_i), (22)

where the sum ranges over the Hamiltonian cycles p of G.

Proof. Indeed, the time of evolution of our algorithm, excluding the latest step, the measurement, is exactly n steps. But we must consider that all results are processed over the number of repeated trials needed to have a good probability of measuring an H result if the graph is Hamiltonian. Also, we must consider that if we measure only the single last cell of the tape, it can contain only two symbols, H or NH, because H may be the final symbol of a single or a few paths, while NH is the final symbol of most of the processed paths. The number of paths processed is the same as in our probabilistic algorithm of the previous section. This quantum algorithm reproduces exactly the probabilistic one of the previous section. Indeed, we used the quantum framework to obtain the same functionality, and we did not gain an advantage. With this we demonstrate Theorem 2.

In addition, note that in each of the n steps of the processing algorithm we call the oracle function, which we suppose to be linear, and in any case not less than constant, in time complexity. In order to make a new measurement, each trial must run fully and independently. Two corollaries follow.

Corollary 2. The complexity of the above quantum algorithm derives from the repeated number of its executions and measurement operations. This must be proportional to the inverse of the probability given in Theorem 2.

Corollary 3. The complexities of the probabilistic and quantum algorithms of Theorem 1 and Theorem 2, respectively, are the same.

This follows from what is stated in [22]: when quantum computation does not use all or a large part of its features, the resulting algorithms often do not perform computationally faster than probabilistic ones for a similar problem. More generally, we can say that in order to obtain a quantum advantage, it is necessary to rely on purely quantum features rather than trying to replicate a classical version of our problem within a quantum framework.

A quantum interference-based algorithm for HCP
If we want a better result, we may relax some QTM conditions and admit a modification of the last two rules, substituting them with new rules (23)-(25) in which, whenever the head reads an NH symbol at the rightmost position, the amplitude of the corresponding transition takes a random sign, determined by a random extraction of 0 or 1, both with 50% probability, with head() the function that reads the rightmost position.
With these rules, if we read the symbol NH in a right movement, the transition to another state with the symbol NH has an amplitude that is positive or negative, each with probability 0.5. So, it is not a standard QTM anymore, but it is a theoretically valid extension of the original QTM, also because it again induces a unitary transformation on QTM configurations. We obtained a probabilistic QTM that respects the unitarity constraint and whose associated amplitudes are functions of the status of the tape and of the state of the machine. The presence of negative interference allows us to access more paths and, if the graph is Hamiltonian, the result of the measurement is H with higher probability. Consequently, the complexity of our algorithm is better than that of the previous algorithm version.

Theorem 3. The time complexity of the algorithm with the new rules in place of the old ones is, up to polynomial factors, the square root of the complexity given by Corollary 2. (26)

Proof. The results are affected by negative interference with each other, but in a random way. So, the final weight may be seen as the sum of negative and positive weights in a fully random modality. This behaviour can be described as the equivalent of a particle in Brownian motion: after t steps, each consisting of an increment or a decrement of one unit, the expected minimum and maximum values of the displacement are of the order of:

±√t. (27)

Consequently, we can derive (26) from our definition of δ in section 3.1.

So, we have a quadratic speedup, the same obtained by applying to our algorithm a quantum search of Grover type [22][23][24], for example in a manner similar to [9]. Now we explain better the oracle function of our quantum algorithm.

Note that, for a transition from the current node v_i, the last node visited of the current partial path, to a neighbor v_j which has not been visited yet, we have:

a(v_i → v_j) = 1/√(d_i − k_i), (28)

where, as in the non-quantum case, d_i is the degree of the node v_i, and k_i is the number of the already visited neighbors of v_i in the current path. If a cell contains a partial path, and the last node of it has more than one unvisited neighbor, then the path is extended, in a superposition way, by one node in the next cell.

If all the neighbors of the current node have been visited and the length of the path is less than n (the number of G nodes), or if at full length there is no closing edge, the next symbol added to the path is NH. Otherwise, if the length of the current path is n and such an edge exists, then the symbol H is added to the right of the path. At the end, all superposition paths have the same length and terminate with H or NH, and the steps necessary to evaluate the graph are exactly the same in all cases. The final states in each superposition are reached at the n-th step. If the graph is Hamiltonian, the last symbol of the tape is a superposition of H and NH. Otherwise, if the examined graph is not Hamiltonian, the last symbol will be NH with no superposition.

On the left-hand side of Figure 1, we can see a graph with two partial paths in superposition. We suppose a certain set of node pairs to belong to E, the set of edges of the graph G. If we suppose to be at the fourth walking edge of our algorithm, some superposition paths have been generated. Of two paths in superposition, the next symbol associated with the second one will deterministically be NH, because from its last node we can go only to an already visited node. Two other paths in superposition both have the possibility to go to a not-yet-visited neighbor.

On the right-hand side of Figure 1, we consider the two possible partial paths.
For the first one, the next choice can be any of three nodes, each with amplitude 1/√3, while for the second path the choice for the next node can be any of four, each with amplitude 1/2 (see the caption of Figure 1). All these paths are possible, so they are in superposition. The chosen nodes are overwritten on the tape, each weighted with the appropriate amplitude. At each phase, the path length of all superposition paths will be the same. If a path terminates before visiting all possible nodes, it follows from the rules that the remaining positions will be filled with the NH symbol. The oracle function determines the next node together with the amplitude of each choice.

In Figure 2, we show an overview of all the algorithms, classic, probabilistic and quantum ones, summarized schematically in Table 1. On the y-axis we represent the expected execution time steps in logarithmic scale. On the x-axis we represent the number of nodes. The first thing that catches the eye is that the classical algorithms can all be outperformed in terms of time complexity by the quantum ones. The brute force classical algorithm (b-f classic) is better than our classical algorithm only for the maximum degree graph class (our d-max), while obviously the brute force quantum algorithm (b-f quantum) is always better than the algorithm labelled b-f classic, this latter being the brute force algorithm that analyzes classically all the possible permutations of the G vertices. B-f quantum is a quantum search-based algorithm that works on all possible permutations; it can be outperformed by our probabilistic algorithm applied to graphs whose degree is limited by the base-ten logarithm of the number of nodes (our d-log10) and obviously by our quantum version on the same graph class (q-our d-log10).

Following the same criteria, with the labels our d-20% and our d-log we represent the time complexity of our algorithm with parameter d, the maximum degree of the graph nodes, limited respectively to 20% of the number of nodes and to the logarithm of the number of nodes. The respective quantum versions of these algorithms are given by q-our d-20% and q-our d-log. Note that all these algorithms have an exponential time complexity and that comparisons are only correct between algorithms operating on the same class of graphs.

In Figure 3, we compare our algorithms with the best classical and best quantum known algorithms, respectively indicated with the labels best cl and best q. These are outperformed only by the quantum version of our algorithm. Indeed, Figure 2 and Figure 3 show that, among those examined, only our quantum algorithm applied to the graph class with degree limited by the base-ten logarithm, labelled q-our d-log10, is better than the best classical and quantum known algorithms.

Moreover, if we limit n to a few hundreds, our quantum algorithm can achieve the same time complexity of q-our d-log10 for every class of graph following the relation (29) between d and n, while our algorithm performs better than the best-known algorithms also for n that tends to infinity if d satisfies a suitable bound, which can be written as:
(31)

Conclusions
We have designed and analyzed a probabilistic algorithm for the Hamiltonian Cycle Problem (HCP), even if without obtaining a relevant improvement for the general case. We also designed two quantum versions of our algorithm, respectively without and with negative interference added. We compared them, the latter in particular, with a classical and a quantum algorithm. In this case, for some classes of graphs, for example the upper-bounded degree graphs, under the assumption of the validity of (31), our algorithm performs better. It outperforms not only the quantum algorithm based on Grover searches over all permutations, but also the best classical and quantum algorithms already known for HCP.

In any case, we can obtain a quadratic speedup with our quantum algorithm with respect to its classic version only if we also use quantum negative interference and not only quantum superposition.

An interesting aspect arising from this work is the possibility of comparing probabilistic with quantum computational models. Indeed, this work can also be seen as a hybrid approach combining quantum and probabilistic elements. Also, it is worth deepening our knowledge of our version of a QTM, as it is a valid tool for algorithm design. Further studies should be done to compare our algorithm with those written specifically for graphs of bounded average degree, for example those in [25][26]. Another theme that emerges from this work is the exploration of other NP problems, determining under what conditions, with constraints on the original problem or extensions of the quantum model, they become tractable.

Figure 1: An example of superposition of two visited paths in a part of the graph G and on the tape, and of the next node choice. Three possible choices are added to the first path and four to the second. The resulting superposition of seven paths is the result of this choice.

Figure 2: Comparison between our probabilistic algorithm and different quantum and brute force algorithms. (*) Obviously, deterministic algorithms are the only ones that can solve the HCP with 100% probability.

Figure 3: Comparison between our probabilistic quantum algorithm and the best-known classic and quantum ones.

Table 1: Final short summary of the different algorithms (or the same ones applied to different classes of graph), compared in order of time complexity, for n that tends to infinity.
2023-11-21T06:42:27.886Z
2023-11-18T00:00:00.000
{ "year": 2023, "sha1": "513dfc5b259b3cf97bfb8ba3046d4fea5b2e05c0", "oa_license": "CCBY", "oa_url": "https://file.techscience.com/files/jqc/2023/TSP_JQC-5-3/TSP_JQC_44786/TSP_JQC_44786.pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "513dfc5b259b3cf97bfb8ba3046d4fea5b2e05c0", "s2fieldsofstudy": [ "Computer Science", "Mathematics", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
158137357
pes2o/s2orc
v3-fos-license
Social Co-Governance for Food Safety Risks

We review relevant literature to propose the connotation and operational logic of food safety co-governance, systematically constituted by the roles, functions, and boundaries of government, enterprise, and social forces. The major thesis is that social co-governance is a kind of societal-wide innovation (i.e., social innovation) that integrates diverse resources and efforts from multiple stakeholders for better and sustainable development of an economy's food institution and system. We then put forward a prospect of the future research on food safety risk co-governance. Theoretical, practical, and policy implications are discussed.

Each year approximately 18 million people die from consumption of unhygienic food and water [6]. In terms of economic theory, the root cause of food safety problems is information asymmetry between food consumers and manufacturers. Market failure that is caused by information asymmetry needs to be addressed by government administrative intervention [7]. Therefore, food safety regulations in most developed countries focus on regulating food production processes or safety levels by mandatory standards. However, the bovine spongiform encephalopathy crisis in 1996, which originated in the United Kingdom and caused global panic, and other major food safety incidents that followed, have shaken and seriously dampened public confidence in the government's ability to manage food safety risks [8,9]. Under the circumstances, governments urgently needed to find more effective approaches to food safety risk governance in response to public expectations and media pressure [10]. Since then, the governments of developed countries began to reform the structure of food safety regulations [11][12][13]. Among the newly proposed thinking, co-governance has emerged as a more transparent and effective way of encouraging social participation for the collective pursuit of food safety [14][15][16]. Extensive social governance practices in developed countries have demonstrated that outsourcing some public governance functions can ease tight government budgets and limited governance resources [17,18]. With the rapid development of food production technology and the increasingly international supply chain, non-government actors, such as companies and industry associations, have unique advantages in food production technology and management [19]. They can supplement government regulation. On this basis, Sinclair argues that manifestations of co-governance are bound to vary greatly as government regulation and social self-governance can be combined in diverse ways [33].

Actors

In the early 1990s, the Dutch government believed that the establishment of a clear coordination and collaboration relationship between the government and social actors, including citizens and social organizations, within the legal framework was very important for improving the quality of legislation. To this end, the subsidiary principle was clearly proposed in relevant documents. This is an early form of co-governance in government documents. In 2000, the British government incorporated co-governance in the Communications Act 2003, and viewed it as a process of ensuring that an effective and acceptable plan is developed through the active and interactive participation of all actors. This actually considers co-governance as a form of cooperation between government agencies and companies in social governance [34].
In this case, governance responsibilities are shared by the government and companies as agreed in the cooperation [35]. Furthermore, Eijlander argues from a legal point of view that co-governance is a hybrid approach to solving specific problems through coordination and cooperation between government and non-government actors, which results in agreements, conventions, and even laws [36]. Elodie and Julie suggest that co-governance is the process by which government and non-governmental organizations, including the general public, as well as other stakeholders, jointly formulate or participate in the formulation of laws and governance agreements or rules [37].

In view of the importance of preventing food safety risks, researchers have extended the concept of co-governance to food safety risk governance. Fearne and Martinez define food safety risk co-governance as the process by which the government and companies cooperate to construct an effective food safety system to ensure better food safety in production and protect consumers from risks, such as foodborne diseases, under the premise that all stakeholders in the food supply chain (from production to consumption) can benefit from improved governance efficiency [38]. Marian suggests that food safety risk co-governance is the process by which the government and social organizations coordinate and cooperate in setting food safety standards, process implementation, enforcement, and monitoring in order to provide higher quality and safer food at a lower governance cost [21]. Standards setting refers to the development of reference quality levels that define acceptable food quality and safety. Process implementation simply refers to putting those standards into practice. Enforcement refers to ensuring correct implementation through law and regulation. Monitoring refers to a constant review of the situation in accordance with the implementation of food safety and quality standards.

Based on the theoretical research results and the experiences of developed countries, it is proposed in this paper that food safety risk co-governance is the process by which all participating actors cooperate and work together, based on their respective functions, to regulate food safety at a lower cost by the combined and synergetic use of multiple instruments, such as government regulation, market incentives, technical regulation, social supervision, and information dissemination, under the framework of laws and regulations and under the premise of balancing the interests and responsibilities of the diverse actors, in order to ensure a higher level of food safety and achieve maximum social welfare. By its nature, the proposed model of social co-governance ideally assumes a 'balance' between multiple parties. Without balance (i.e., a situation of dis-equilibrium of the power relations among government, company, and society), the "co-" governance would not be possible. Thus, it is difficult to specify the negative outcome of the social co-governance model per se (emphasis added). Note that the concept of co-governance implies that the rights and freedom of government, enterprises, and social forces are equal, as they intend to contribute to the improvement of food safety. Such rights and freedom are not directly reflected by the legislation power defined by law.
On the contrary, due to heterogeneous but complementary capabilities that are possessed by these different parties, government, enterprises, and social forces could all contribute differently but effectively to food safety for the whole society. Overall, the connotation of food safety risk co-governance is visually depicted in Figure 1.

Operational Logic of Food Safety Risk Co-Governance

Since the 1990s, public governance theory has developed rapidly and it has become a popular topic in social science research. When compared with traditional social management theories, public governance theory overcomes the simplistic dichotomy between government and market. It recognizes the objective existence of "government failure" and "market failure", and even their coexistence in some areas, and it suggests that the third sector, also known as the "third hand", has to be introduced into the governance of public affairs. Moreover, the theory argues that the government, the market, and the third sector should be in equal positions and form a coordinated and effective network in order to more effectively distribute social benefits and ensure the maximization of social welfare.

Based on public governance theory [39,40], food safety is characterized by inseparable utility, nonrivalrous consumption, and nonexcludable benefits; therefore, food safety possesses the characteristics of public goods. The occurrence of food quality and safety incidents can cause public health damage, have a significant impact on the healthy development of the food industry, and even pose a huge threat to social and political stability; therefore, food safety risks are a public crisis [41,42]. Hence, it is the responsibility of the government to prevent food safety risks and ensure food safety. However, food is also an ordinary good. The demand for food production and the supply of the entire society should be met through market mechanisms. Nevertheless, food possesses search, experience, and credence attributes. As one of the attributes, credence (e.g., pesticide residues in vegetables or oil used in hot pots) cannot be ascertained by consumers until sometime after purchase, or can never be ascertained. This, however, is often clear to producers [43]. As market failure is caused by food safety information asymmetry between producers and consumers [7], government intervention is therefore required.

Traditional theories and practices of food safety risk governance are mainly based on the principle of "improving government regulation". In fact, when considering the changes in the food safety risk governance system, developed countries generally initially adopted a government-led approach. However, in the context of the increasing specialization in food production and the gradual internationalization of food trade, the government has its own limitations. In other words, the government fails in food safety risk governance [44]. Due to its complexity, diversity, and technical and social nature, food safety risk governance cannot rely solely on the government. Therefore, the focus should not only be on government governance and corporate self-governance, but also on the
engagement of social actors, such as social organizations and the general public, thereby achieving effective co-governance by the entire society [45,46]. As a new form of governance, the emergence of co-governance has completely changed the understanding of ex-post food safety risk governance and has compensated for the shortcomings of the traditional government regulation approach [47]. Based on the research results of May and Burby [48], Elodie and Julie [37] developed a framework for analyzing co-governance in enforcing food safety, as shown in Figure 2.
From the perspective of both philosophy and strategy, co-governance is more proactive and creative in managing food safety risks. For example, traditional regulation by the government alone imposes severe punishment on corporate offenders identified by random inspections. In contrast, co-governance brings together various actors to prevent corporate violations through a series of measures, such as education and training, and it encourages corporate compliance with the law through targeted inspections and market incentives. Therefore, co-governance enables more actors to participate in food safety risk governance, thereby improving governance flexibility, extending policy applicability, and reducing public costs [29,31]. The practices in developed countries have proved that co-governance brings about significant changes in food safety risk governance. Based on the literature, these changes can be summarized into three categories.

A New Combination and Qualitative Improvement of Regulatory Capacity

When compared with traditional governance, co-governance engages nongovernment actors, such as social organizations, companies, the media, and the public, in food safety risk governance through effective mechanisms. The result is that a much larger number of actors are able to play a role in governance [21].
Social actors play an important role in providing higher quality and safer food in a way that complements government governance [21]. Food industry organizations and manufacturers are generally more aware of food quality, while governments can create reputation-based incentives to monitor food quality. This indicates strong complementarity between government governance and corporate and social governance [49]. Co-governance allows all actors to give full play to their competence [50] and it is thus more effective than traditional governance [11,36]. For example, in the framework of the EU food hygiene regulations, governments, companies, social organizations, and citizens have actively participated in the regulation of food safety and played an important role in ensuring food safety [51].

New Improvements in Quality and the Practicality of Legal Standards

Co-governance improves the quality and practicality of legal standards in food safety regulation. On the one hand, expertise in food quality and safety is the basis for the development of appropriate laws [33]. The engagement of nongovernment actors, such as companies and industry organizations, can help in the development of more well-considered food safety standards and regulations, owing to their unique advantages in this regard [19]. On the other hand, the government can directly upgrade the nongovernmental standards that were developed by companies or industry organizations to national legal standards [31]. Since these standards are based on expertise in the food industry, they can be applied relatively perfectly to the food industry and are considered to be the most adequate and effective [52,53]. Moreover, food companies feel a sense of belonging and ownership [54] and are more likely to understand and comply with new legal standards developed with their participation [55]. In the EU, legal food safety standards have combined government standards with industry and corporate standards [16,56,57].

New Changes in Governance Efficiency and Cost

Co-governance can reduce the cost and improve the efficiency of food safety regulation. The participation of multiple actors helps to achieve decisions that are in line with the actual corporate or industry situation, thereby improving the practicality of the decisions that are made and reducing the burden on all actors [58]. Moreover, co-governance can distinguish between high-risk and low-risk companies, so the government can carry out targeted inspections. This will lead to greater pressure on high-risk companies and a reduced burden on law-abiding companies [59]. In the United Kingdom, farms participating in farm insurance are inspected at a rate of 2%, while nonparticipating farms are inspected at a rate of 25%. This has led to a cost reduction of ₤571,000 per year for participating farms and ₤2 million for the local government agencies [60].
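As a back-of-the-envelope illustration of why such targeted inspection rates cut costs, the short sketch below compares expected inspection counts under risk-based rates versus uniform inspection. Only the 2% and 25% rates come from the text; the farm counts and the per-inspection cost are invented for the example.

```python
# Hypothetical illustration of risk-based (targeted) vs uniform inspection.
INSURED_RATE, UNINSURED_RATE, UNIFORM_RATE = 0.02, 0.25, 0.25

def expected_inspections(n_insured, n_uninsured):
    targeted = n_insured * INSURED_RATE + n_uninsured * UNINSURED_RATE
    uniform = (n_insured + n_uninsured) * UNIFORM_RATE
    return targeted, uniform

# Invented counts: 8,000 insured farms, 2,000 uninsured farms.
targeted, uniform = expected_inspections(n_insured=8000, n_uninsured=2000)
cost_per_inspection = 300  # hypothetical cost in ₤ per visit
print(f"targeted: {targeted:.0f} inspections (~₤{targeted * cost_per_inspection:,.0f})")
print(f"uniform:  {uniform:.0f} inspections (~₤{uniform * cost_per_inspection:,.0f})")
```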
Co-governance mode for food safety is beneficial for the collaboration among government, enterprises, and social forces, with their heterogeneous but complementary capabilities being fully utilized.
As to the capability of distinguishing between high- versus low-risk companies, China can also serve as a good example. With the big data accumulated by China's Customs regarding food rejected by other countries, clear identification of high-risk companies with highly rejected items as their core products could be made. Such big data is shared among the three major actors of the co-governance system, in order to allow the government's monitoring, enterprises' self-reflection, and social forces' warning functions. Even consumers could adjust their consumption behaviors based on such data sources [61]. In sum, as compared to the government-dominated model of food safety governance with a single source of information, the risk-identifying capability is largely improved (in co-governance mode) by transparent information flows among government, enterprises, social forces, and even consumers. Therefore, when compared with traditional government regulation, co-governance is more flexible and efficient in regulating food safety. Under the operational logic of co-governance, food safety risk governance has transformed from a traditional punishment-oriented approach to a modern prevention-oriented approach [37].

Role of Government in Food Safety Risk Co-Governance

The traditional theoretical research on food safety risk governance is mainly based on the principle of "improving government regulation", with an emphasis on severe punishment. In the 1990s, under public pressure from frequent flagrant food safety incidents, developed countries strengthened their food safety risk governance by ex-ante legislation and ex-post direct intervention based on the principle of severe punishments [11]. However, due to the complexity, diversity, technicality, and sociality of food safety risk governance, it was later realized that there were substantial problems in relying solely on administrative departments to enforce food safety. Colin et al. argue that the fragmented organization of regulatory agencies leads to significant dissipation and weakening of the regulatory capacity and even administrative corruption, such as rent seeking and rent setting [62]. Despite all of these problems with traditional regulation, the government plays an irreplaceable role in food safety risk co-governance [63]. In fact, it is of the utmost importance for the government to identify its role and scope in co-governance. David et al. propose that the government needs to steer rather than row and to empower rather than serve [64]. In contrast, Janet and Robert argue that the government needs to serve rather than steer in order to meet the individual needs of citizens as much as possible without making decisions for them [65]. The Better Regulation Task Force suggests that, for any given food safety problem, the level of government intervention may range from doing nothing (leaving the market to find the requisite solution) to direct regulation [66]. Garcia further divides government governance into six stages in terms of intervention level: no government intervention, corporate self-governance, co-governance, information and education, market incentives, and direct government command and control (Table 1) [21]. At the third stage, co-governance, the government has a specific function and role. The basic functions of the government in food safety risk co-governance are further described as follows.
Creating an Institutional Environment That Guarantees Market and Social Order

In the framework of co-governance, the most important responsibility of the government as a leader is to create an institutional environment that guarantees market and social order [67]. It is the responsibility of the government to ensure that companies produce food according to legal standards [37]. Moreover, the government has the responsibility to establish an effective mechanism for punishing violating companies under the legal framework, which is conducive to building consumer confidence in food safety risk governance [61]. However, government regulation and punishment should be set at a level that encourages companies to voluntarily implement quality assurance systems, such as Hazard Analysis and Critical Control Point (HACCP), without undermining their enthusiasm and decision-making flexibility in autonomous production. This is a major challenge for governments [31].

Establishing a Compact and Flexible Regulatory Structure

The effectiveness of food safety risk governance depends on the regulatory structure. A decentralized, inflexible structure can severely limit the ability of actors to cope with changing food safety risks [68,69]. Therefore, the government needs to construct a compact and flexible co-governance structure by employing different combinations of policy instruments that are based on the actual situation [70,71]. The lack of trust between actors in the food supply chain can seriously impede cooperation [21,38]. Therefore, information exchange and legislation should form an essential component of the regulatory structure in order to address the distrust in the regulatory structure through information disclosure and exchange [72].

Building a Collaborative Partnership with Companies and Social Actors

As a key actor in public governance, the government should exert its strong influence on bringing together companies and social actors in food safety risk governance by continuously promoting cooperation with companies, social organizations, and individuals [36]. The government should build an equal, coordinated, orderly, friendly, and collaborative partnership with companies, consumers, social organizations, and other actors with an open and inclusive attitude in the system of food safety risk co-governance. On the other hand, efforts should also be made to reduce buck-passing and disinterest among government agencies to improve the regulatory capacity [73]. Moreover, government information should be disclosed in an open and transparent manner to increase the trust of other actors in the government, in order to promote cooperation with companies and social actors and to create a harmonious and orderly co-governance environment [74]. In addition, providing timely information, education, and training for food companies can improve the relationship between the government and companies [32,75].

Role of Companies in Food Safety Risk Co-Governance

The behaviors of food companies, especially producers, directly or indirectly determine food quality and safety. Co-governance requires food companies to assume more responsibility for food safety [37]. However, the ultimate goal of companies is to obtain economic benefits. Food producers and traders decide whether to comply with food safety regulations based on the costs and benefits. The responses can range from full compliance to noncompliance [76].
Food companies also assess the costs and benefits of their internal (resource) and external (reputation and punishment) incentives and then determine the appropriate safeguards to achieve a certain food safety level based on budget limits, marketing strategies, and market structure [77]. Therefore, it is necessary to use market mechanisms to induce companies to fulfill their responsibilities as actors in food safety risk co-governance.

Enhancing Corporate Self-Governance

For companies, high food quality not only means no punishment but also a good reputation and consequent benefits. Enhancing corporate self-governance is an important part of ensuring food quality [38]. Corporate self-governance involves risk analysis and control. In this regard, HACCP is one of the most widely recognized approaches to preventing food safety risks and it has been implemented in many food companies within the EU and the US [78]. While food quality and sales incentives can encourage companies to implement HACCP, its use is limited by company size [79]. Due to the lack of funds and technology, medium and small companies, which account for the vast majority of food companies, find it difficult to implement HACCP and similar quality control systems, and they need to implement self-regulation based on their own specific circumstances [32,80].

Ensuring Food Quality through Contractual Mechanisms

As food companies in developed countries often achieve quality and safety in food production and transactions through vertical contract incentives, the midstream and downstream manufacturers play a particularly important role in the food supply chain. To better control product quality, contract incentives will become increasingly common among farmers, processors, transporters, and retailers in the food supply chain. When the characteristics of the product being sold are easily identifiable, contract terms will focus on financial incentives; otherwise, the focus will be on defining specific inputs and behavioral requirements [81]. Downstream companies can use high-precision inspection systems to ensure the quality and safety of raw materials and semi-finished products from upstream companies and obtain compensation from upstream companies through contractual mechanisms in the case of food safety and quality problems. In this way, upstream companies are motivated to take measures to ensure the safety and quality of the food produced [82]. Therefore, participants in the food supply chain can control the quality of the final product for consumers through contract terms [83].

Communicating Safety Information to Consumers

Food companies can communicate safety information to consumers using instruments such as certification, labeling and traceability systems, to solve food safety information asymmetry. In terms of certification and labeling, certification by local private organizations or at the farm level, as well as quality and safety standards established by retailers, has been used in addition to the certification standards set by international organizations and governments in developed countries [84]. For example, EUREP GAP (Good Agricultural Practice) is a standard developed by the Euro-Retailer Produce Working Group (EUREP) in compliance with the Anti-Trust Act to provide integrated farm assurance, integrated aquaculture assurance, and technical specifications for tea, flowers, and coffee [85].
These technical specifications are reflected in many ways, such as equipment standards, production processes, packaging processes, and quality management, and at times are even more stringent than the legal norms [86]. Traceability systems, on the one hand, allow for the differentiation of food with a safety credence attribute to ensure food safety for consumers, and, on the other hand, enable companies to reduce the costs in the production and marketing of risky food to obtain a net income [87].

Role of Social Actors in Food Safety Risk Co-Governance

Social actors are an important part of food safety risk co-governance. They are a powerful supplement to government governance and corporate self-governance and determine the success or failure of public policies [88][89][90]. Social actors are defined as basic units that can participate in and act on social development. As the "third sector", which is relatively independent of the government and market, social actors are mainly composed of citizens and various social organizations [91,92]. Social organizations mainly include associations, clubs, health care organizations, educational institutions, social service agencies, advocacy groups, foundations, and self-help groups with membership requirements [93]. As the state-society and public-private link, social organizations tend to establish equal and mutually beneficial cooperation in accordance with the interests of the majority of the public. Such efforts can reduce the uncertainty of regulatory policies and effectively compensate for government and market failures [94]. On the one hand, social organizations can supervise government activities and force the government to correct misconduct, thus compensating for government failure [95]. On the other hand, in the case of contract failure in the market, nonprofit social organizations can effectively restrict the opportunism of producers in order to remedy market failure, thereby meeting public demand for social public goods [96]. Social organizations in the US and EU often participate in food safety regulation through organized demonstrations, protests, publicity, and boycotts [97].

Individuals are the best judges of their own actions [98]. Therefore, every citizen is the best regulator of food safety. Citizens can participate in food safety regulation anytime, anywhere, and in various ways, such as through the Internet, which is a convenient and easily accessible means [99]. However, it is difficult even for scientists in the field of food safety to fully understand food safety due to its vast content. Therefore, it is difficult for the public, which has relatively limited knowledge of food safety, to effectively participate in food safety risk governance. Nevertheless, improving the transparency and traceability of food safety systems can significantly enhance consumers' regulatory capability [100]. For example, because of concerns about the safety of genetically modified food, citizens have called for and required adequate information disclosure by the government and companies in order to protect their rights [101].

Summary and Future Research

In summary, the connotation and operational logic of food safety risk co-governance, as well as the role and scope of each actor, have been extensively described in the literature. This has important implications for further theoretical research and the practice of food safety risk co-governance.
In particular, the relationship between the government and food companies has been transformed from the traditional unequal regulator-regulatee relationship to an equal collaborative interactive relationship, with particular emphasis on corporate self-governance. However, experiences from other countries need to be adapted due to institutional differences. The current definition and practice of food safety risk co-governance in every country is based on its own circumstances. For example, in the vertical regulatory system implemented by the US federal government, national public goods are mainly supplied by the central government, with an emphasis on the self-regulation of food producers. In other countries, in contrast, local governments may take overall responsibility for food safety regulation based on the principle of reducing social risks. Therefore, to add dedicated knowledge to the literature, future studies are encouraged to conduct context-specific investigations that help explicate the special institutional, economic, and societal characteristics that might make the practice of social co-governance in one country different from that of other countries.

To take China as an instance, its food safety situation has attracted attention, since it is the country with the largest population and huge needs in food consumption. Although China's food safety risk governance standards have improved rapidly in recent years, it still faces problems and has room for improvement. Therefore, promoting empirical research on the social co-governance of food safety risks in China has positive and practical significance. Future studies may follow three suggested directions. First, China's food safety governance is based on a responsibility system borne heavily by local governments, while non-government organizations have played relatively insignificant roles. It might be important to investigate how local governments could motivate and leverage the capabilities of non-government organizations to benefit co-governance. Second, there is a large number of micro-enterprises (i.e., those with fewer than ten workers) in China's food industries. It would be a great challenge to encourage self-regulation for these micro-enterprises. Third, it is also important to research the context-specific construction of a food safety co-governance system of China's own, in addition to referencing Western countries' practices.

Author Contributions: L.W. led the project, conceptualized and wrote the original draft of the paper; Y.L. was a major co-writer of the first draft; X.C. sorted and logically analyzed the literature; F.-S.T. reviewed and validated the manuscript and was in charge of the revision and resubmission.

Conflicts of Interest: The authors declare no conflict of interest.
2019-05-20T13:06:31.901Z
2018-11-17T00:00:00.000
{ "year": 2018, "sha1": "c34e511936d704c5423cf4545cc43908d8abd545", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/10/11/4246/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "0d9d2b7dac4b67c44a52a0b6bb7783820b923348", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Economics" ] }
227126765
pes2o/s2orc
v3-fos-license
Major Pollen Allergen Components and CCD Detection in Bermuda Grass Sensitized Patients in Guangzhou, China

Objective: Bermuda grass pollen is a common inhaled allergen. The aim of this study was to investigate the molecular sensitization patterns to major pollen allergens (Bermuda grass, Mugwort and Timothy grass) and cross-reactive carbohydrate determinants (CCD) in Bermuda grass sensitized patients in southern China.

Methods: Serum specific IgE (sIgE) levels of Bermuda grass allergen components (Cyn d 1 and Cyn d 12), Timothy grass allergen components (Phl p 1, Phl p 4, Phl p 5, Phl p 7 and Phl p 12), Mugwort allergen components (Art v 1, Art v 3 and Art v 4) and CCD were detected in 78 patients sensitized to Bermuda grass via the EUROBlotMaster system.

Results: Compared with CCD-positive patients, those with negative CCD results had significantly higher positive rates of Cyn d 1 (47.8% vs 14.5%), Phl p 1 (26.1% vs 7.3%), Phl p 12 (21.7% vs 3.6%) and Art v 4 (26.1% vs 3.6%) (all p < 0.05). Patients <18 years old had the highest positive rate of Cyn d 1 (40.7%). Additionally, rhinitis patients had the highest positive rate of Cyn d 1 (60.0%), and all patients with Cyn d 12 sensitization (17.2%) were asthmatic patients. Optimal scale analysis showed that Phl p 1 and Cyn d 1 were closely related (Cronbach's alpha = 85.1%).

Conclusion: The highest positive rate among pollen allergen components was for Cyn d 1 in Bermuda grass sensitized patients in southern China. Most patients were sensitized to CCD alone, and CCD may have less interference in the detection of Cyn d 1, Art v 4, Phl p 1 and Phl p 12. The sensitization patterns of pollen allergen components varied across ages and diseases, and the diagnostic strategy for pollen allergens needs to be considered in the future.

Introduction

Bermuda grass, a major inhaled allergen in Asia and Europe, releases abundant pollen from late spring to early summer, causing cough, wheezing or rhinorrhea and reducing the quality of life of asthma/rhinitis patients. 1,2 Recent research from Argentina found that 48.1% of patients with seasonal allergic rhinitis were sensitized to Bermuda grass pollen, while in Thailand, the positive rate of Bermuda grass was 21.1% in patients with allergic diseases by skin prick test. 3,4 Our previous research found that more than 60% of Bermuda grass sensitized rhinitis/asthma patients were sensitized to Timothy grass, while another study showed that some Bermuda grass sensitized patients were also co-sensitized to Mugwort. 5 It is likely that these allergens contain the same protein components, inducing cross-reactivity. Alternatively, it could be due to cross-reactive carbohydrate determinants (CCD), substances that lead to positive detection results without causing any allergic symptoms and that are found in a large number of plant allergens. 6 Ebo et al. found that 23.5% of Timothy grass sensitized patients were positive for CCD, and research from China showed that the positive rate of CCD in patients who were Mugwort sensitized was 9.0%. 7 However, research is lacking on the relationship between Bermuda grass, Mugwort and Timothy grass allergen components and CCD in southern China, which contains a large population and a variety of vegetation. Component-resolved diagnosis (CRD) can distinguish truly sensitizing components from cross-reactive proteins among the molecular allergens, which provides the possibility of accurate diagnosis and treatment.
8 Thus, this study used CRD technology to detect the serum sIgE of Bermuda grass, Mugwort and Timothy grass allergen components and CCD at the same time, aiming to investigate the relationship between Bermuda grass, Mugwort and Timothy grass allergen components and CCD, and to provide guidance for clinical diagnosis and treatment.

Materials and Methods

Ethics
This study was reviewed and approved by the ethics committee of the First Affiliated Hospital of Guangzhou Medical University (GYFYY-2016-73). The use of human serum samples was in accordance with legislation in China and the wishes of donors, their legal guardians or next of kin, where applicable, who had offered written informed consent to using the serum samples for future unspecified research purposes.

Study Design
This was a cross-sectional study. Patients with Bermuda grass sensitization from the Allergy Information Repository of the State Key Laboratory of Respiratory Disease (AIR-SKLRD) during June 2017 to June 2019 in southern China were included by the following inclusion criteria: 1) a history of pollen exposure leading to respiratory allergic symptoms, such as wheezing, dyspnea, sneezing, runny nose, nasal obstruction or nasal itching; and 2) a specific IgE (sIgE) level of Bermuda grass (Cynodon dactylon) ≥ 0.35 kU/L. Patients with specific immunotherapy, cancer, autoimmune diseases or parasitic diseases were excluded. A total of 78 patients sensitized to Bermuda grass underwent detection of sIgE of Bermuda grass, Mugwort (Artemisia vulgaris) and Timothy grass (Phleum pratense) allergen components and CCD. The diagnoses of asthma, rhinitis, chronic obstructive pulmonary disease (COPD) and chronic cough were based on the guidelines of GINA, 9 ARIA, 10 GOLD-COPD 11 and The Chinese national guidelines on diagnosis and management of cough, 12 respectively. No patient included in this study had more than one of these respiratory diseases. Details of study subjects are shown in Table 1; the average age was 34.60±23.30 years old.

Statistical Analysis
Statistical analyses were conducted with SPSS 25.0 (Chicago, IL, USA). Parametric quantitative data were presented as mean ± standard deviation. Non-parametric quantitative data were presented as median (interquartile range). Categorical data were reported as percentages showing the proportion of positive results, and the chi-square test (χ2) was used to compare the variance of data among the groups. Correlation analyses were performed using Spearman's tests, with the correlation coefficients presented as "rs". The correlation between allergen components was calculated with optimal scale analysis. P < 0.05 was considered statistically significant.

Results

The Positive Rate of Bermuda Grass, Mugwort and Timothy Grass Allergen Components Between CCD-Positive and -Negative Patients
Compared with CCD-positive patients, those with negative CCD results had significantly higher positive rates of Cyn d 1 (47.8% vs 14.5%), Phl p 1 (26.1% vs 7.3%), Phl p 12 (21.7% vs 3.6%) and Art v 4 (26.1% vs 3.6%) (all p < 0.05). However, the positive rate of Phl p 4 (10.9% vs 0.0%) was significantly higher in patients with CCD-positive results than in patients with CCD-negative results (p < 0.05) (Table 2). There were no significant differences in Cyn d 12, Art v 3, Art v 1, Phl p 5 and Phl p 7 between the two populations.
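As an illustration of the group comparison reported above, the sketch below runs a chi-square test on a 2x2 contingency table for Cyn d 1. The counts are back-calculated from the reported percentages (47.8% of 23 CCD-negative vs 14.5% of 55 CCD-positive patients) and are therefore approximate; scipy stands in here for the SPSS procedure actually used.

```python
from scipy.stats import chi2_contingency

# Rows: CCD-negative (n=23) and CCD-positive (n=55) patients.
# Columns: positive / negative for Cyn d 1 (counts back-calculated from
# the reported 47.8% and 14.5% rates, so approximate).
cyn_d_1 = [[11, 12],
           [8, 47]]

chi2, p, dof, expected = chi2_contingency(cyn_d_1)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p < 0.05 indicates a significant difference
```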
The Positive Rates of Bermuda Grass, Mugwort, Timothy Grass Allergen Components and CCD in Different Diseases
Among all detected pollen allergen components, the positive rates of Cyn d 1 were the highest in both patients with rhinitis (60.0%) and patients with asthma (31.0%), whereas the highest positive rate was for Art v 1 in patients with COPD (33.3%); in patients with chronic cough, no positive results for any detected allergen components were observed (Figure 3A-C). Interestingly, patients positive for Cyn d 12 were all asthma patients, with a positive rate of 17.2% (Figure 3A). Among the Mugwort allergen components, asthma patients had the highest rates of Art v 3 (17.2%) and Art v 4 (24.1%) (Figure 3B). Besides, among the Timothy grass allergen components, patients with rhinitis had the highest positive rate of Phl p 1 (30.0%), while patients with asthma had the highest positive rate of Phl p 12 (20.7%) (Figure 3C). Regardless of the disease, the CCD-positive rate was high (58.6-83.3%) (Figure 3D). However, no significant difference in the positive rate of CCD was found between the diseases (χ2 = 5.122, p > 0.05).

Discussion

Bermuda grass pollen is one of the common inhaled allergens that can cause rhinoconjunctivitis in a considerable number of people. Component-resolved diagnosis, an advanced method of allergen diagnosis, helps personalized management and treatment of allergic diseases. In this study, we detected the serum sIgE levels of CCD and allergen components of Bermuda grass, Mugwort and Timothy grass in Bermuda grass sensitized patients by CRD. Our results showed that Cyn d 1 had the highest positive rate among pollen allergen components in the study population, followed by Phl p 1, Art v 3, Art v 4, Phl p 12 and Phl p 4. Cyn d 1, Phl p 1 and Art v 3 may be the major sensitizing components in Bermuda grass sensitized patients. However, this may also be due to the geographical environment. Guangzhou is a central city of southern China with warm temperatures and high humidity, and the climate here may affect the local composition of vegetation pollen and the content of CCD. It is worth noting that CCD is considered a potential source of positive IgE detection results without clinical significance, which can happen in the detection of specific IgE against plant allergens. 13 Regarding CCD and plant allergens, Sinson et al. reported that the CCD-positive rate in patients sensitized to Timothy grass was 16.7%, 14 and our previous study found that 41.4% of patients with positive sIgE results for Bermuda grass also showed CCD-sIgE positivity. 5 In this study, most of the components used in the EuroBlot tests were recombinant molecules, which provided good markers for the diagnosis of sensitization to components. Because certain purified allergen components are glycosylated, they may cause false-positive results due to CCD reactivity. Nonetheless, there might be similar antigen epitopes between CCD and the allergen components that could also cause IgE reactivity. Further, our results show that the positive rates of CCD ranged from 52% to 88% in various age groups, whereas a large cohort study reported that the overall positive rate of CCD was 22%, with the teenage group reaching 35%. 15 The inconsistency in CCD reactivity was possibly because our study population was positive for Bermuda grass.
Furthermore, the high CCD positivity in our study may also suggest that serologic screening results for Bermuda grass without a CCD blocker might not be reliable in grass pollen allergic subjects. Our results also showed that the positive rates of Cyn d 1, Art v 4, Phl p 1 and Phl p 12 in patients with CCD-negative results were significantly higher than in patients with CCD-positive results. This indicated that the positivity of CCD might have less impact on the detection of these allergen components. However, since inhibition assays were not performed in this study, the cross-reaction of CCD and pollen allergen components remains to be investigated. Despite this, our study demonstrated a high positivity of CCD in a Bermuda grass sensitized population and provided the profile of CCD positivity across various pollen allergen components, which may aid in improving the diagnosis of Bermuda grass sensitization and give understanding of the cross-reaction patterns of CCD in these allergen components.

In addition, we reported a considerably high proportion of polysensitization in patients sensitized to pollen components, which should be noted in clinical practice. In the United States and Europe, polysensitization is more prevalent in patients with moderate to severe respiratory diseases than monosensitization (range, 50%-80%). 16 Another study showed that polysensitization can even reach 97.4%. 17 Additionally, polysensitization may also relate to multimorbidity, especially in children. 18 Because some of the clinical data of the studied population were not available, multimorbidity as well as disease severity were not analyzed in polysensitized patients.

Besides, we noticed that in patients <18 years old, Cyn d 1 had the highest positive rate, while in patients ≥60 years old, Art v 1 and Phl p 4 had the highest positive rates. This might not only be due to differences in the status of the immune system at different ages, but might also relate to the living environment and lifestyles. A study in adolescents showed that the positive rate of Cyn d 1 was the second highest (36.0%), after the positive rate of Phl p 1 (39.0%). 19 Although the allergen component with the highest positivity was different from our results, which might potentially be owing to the discrepancy of populations and/or geographical regions, both results showed a high positive rate of Cyn d 1 in adolescents. Therefore, the sensitization patterns may differ at different ages, and it seems that adolescents allergic to Bermuda grass were more likely to be sensitized to Cyn d 1.

Meanwhile, we compared the sensitization pattern in different diseases. Patients with allergic rhinitis had the highest positive rate of Cyn d 1. Interestingly, we found that patients positive for Cyn d 12 were all asthma patients. Although our sample size was limited, this could also reveal the characteristics of the sensitization pattern in the disease. A study in Argentina has shown that 43.5% of asthma patients were positive for Cyn d 12. 3 With the advantage of CRD, which provides a more accurate sensitization profile, we were able to identify the correlation between disease and particular allergen components. Based on our results, there was an association between asthma and Cyn d 12, and thus asthma prevention should be considered if patients are sensitized to Cyn d 12. Further studies with larger sample sizes are required to reveal the correlation of asthma and Cyn d 12 sensitization. There were several limitations in this study.
Firstly, our sample size was small, mainly because the major sensitizing allergen in Guangzhou is house dust mite (HDM), so the number of patients with weed pollen sensitization was relatively small. 20,21 Secondly, because of the limited availability of patients' clinical information, the associations between sensitization components and disease severity were not investigated. Thirdly, due to practical constraints, none of our patients underwent skin prick testing, and we did not include other allergic subjects in this study. Finally, inhibition assays were not performed, and therefore the false-positive rates of the detected allergen components in the presence of CCD remain unclear. Despite this, we compared the positive rates of pollen allergen components between CCD-positive and CCD-negative patients, which provides a profile of the allergen components likely to be affected by CCD. Conclusion In conclusion, Cyn d 1 had the highest positive rate among pollen allergen components in Bermuda grass sensitized patients in southern China. Most patients were sensitized to CCD alone. The positive rates of Cyn d 1, Art v 4, Phl p 1, and Phl p 12 in patients with CCD-negative results were higher than in patients with CCD-positive results, while the positive rate of Phl p 4 showed the opposite pattern. Moreover, sensitization patterns differed across ages: patients <18 years old exhibited the highest positive rate of Cyn d 1, and patients ≥60 years old showed the highest positive rates of Art v 1 and Phl p 4. In addition, Cyn d 12 sensitization might be closely associated with asthma. Data Sharing Statement The data that support these findings are available on reasonable request from the corresponding author BQS. Data are not publicly available due to concerns regarding research participant privacy.
2020-11-19T09:16:31.828Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "3ab8173ea773237f2f4c7a6661278d55a1a57e68", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=63759", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "77a63255e07c82044536af0c24e3a1d2980e5bb4", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
256291009
pes2o/s2orc
v3-fos-license
Annotating very high-resolution satellite imagery: A whale case study The use of very high-resolution (VHR) optical satellites is gaining momentum in the field of wildlife monitoring, particularly for whales, as this technology is showing potential for monitoring less studied regions. However, surveying large areas using VHR optical satellite imagery requires the development of automated systems to detect targets. Machine learning approaches require large training datasets of annotated images. Here we propose a standardised workflow to annotate VHR optical satellite imagery using ESRI ArcMap 10.8 and ESRI ArcGIS Pro 2.5, using cetaceans as a case study, to develop AI-ready annotations. • A step-by-step protocol to review VHR optical satellite images and annotate the features of interest. • A step-by-step protocol to create bounding boxes encompassing the features of interest. • A step-by-step guide to clip the satellite image using bounding boxes to create image chips. Specifications table Subject area: Earth and Planetary Sciences. More specific subject area: Earth observation. Name of your method: Satellite image annotation to create point, bounding box and image chip datasets to train automated systems. Name and reference of original method: Cubaynes, H.C., Fretwell, P.T. (2022) Whales from space dataset, an annotated satellite image dataset of whales for training machine learning models. Sci. Data 9, 245. https://doi.org/10.1038/s41597-022-01377-4. Resource availability: Software: ESRI ArcGIS Pro 2.5, ESRI ArcMap 10.8. Background The latest advancements of very high-resolution (VHR) optical satellite imagery (below 1 m spatial resolution) show tremendous potential for monitoring wildlife in recent trials [1][2][3][4][5][6]. There are also a few VHR satellites with synthetic aperture radar (SAR) sensors, which can image in the dark and through clouds by returning an image of the surface roughness; however, SAR applications to wildlife surveys are at an early stage [4]. Therefore, in this study we focus on VHR optical satellites and refer to them as VHR satellites in the remainder of the text. VHR satellite imagery is currently being assessed as a complementary approach to traditional survey methods for monitoring whales, and is particularly beneficial for less studied regions and over large areas [3, 7]. Monitoring whales is crucial, particularly for estimating abundance and distribution, which is of broad interest to government agencies, academic, and commercial institutions around the globe.
Some countries are legally required to monitor marine mammals inhabiting their national waters, such as the US with the Marine Mammal Protection Act 1972 [8], and Australia with the Environment Protection and Biodiversity Act 1999 [9]. Whale abundance and trends are monitored to assess the status of populations and their recovery from commercial whaling and other anthropogenic threats (e.g. ship strike, entanglement in fishing gear, noise pollution) [10][11][12]. Research using VHR satellite images to monitor cetaceans has increased since the pioneering studies of Abileah (2002) [13] and Fretwell et al. (2014) [14], highlighting how VHR satellite imagery may help gather missing information about whales and complement boat and aircraft surveys [3, 15-23]. There have been developments in using this technology in remote regions to estimate whale density [17], detect strandings [21, 24, 25], and count cetaceans [18]. Each study highlights the challenges that need addressing and the further work required, but all agree on the opportunity this technology offers for monitoring whales in remote regions. Among the challenges to scaling this technology to its full potential is the need to analyze the imagery efficiently using automated systems, with machine learning approaches presented as most suitable for wildlife [15, 26-28]. In machine learning, models are trained to recognize and classify visual objects through an iterative process, where many examples of the target object are fed into model training [29, 30]. Machine learning models require a large annotated dataset of the target species, and sometimes of confounding features, to train and test the algorithms. Initially, these datasets need to be created by humans manually annotating imagery, until automated or semi-automated systems can accurately identify the target feature. Few such openly accessible datasets exist: the Cubaynes and Fretwell (2022) [31] dataset, which includes point and bounding-box annotations and image chips, and the Charry et al. (2021) [18] dataset, which includes point annotations. Ideally, the creation of such a dataset would be a collaborative effort using similar protocols and data formats [31]. Our aim is to share a detailed step-by-step workflow for annotating VHR satellite images and for creating datasets of annotations as points, bounding boxes, and image chips in a png format, which will facilitate collaboration across research groups towards the development of an operational system for marine animal detection in VHR satellite imagery. Here we provide a general outline of the steps required to annotate satellite images and create datasets, alongside detailed protocols for ESRI ArcMap 10.8 (Supplementary material 1) [32] and ESRI ArcGIS Pro 2.5 (Supplementary material 2) [33], as used by several studies detecting wildlife in VHR satellite imagery [3, 17, 19, 26, 31] but with more detail to allow reproducibility and transferability. We use cetaceans as a case study to explain the steps, which are transferable to other objects that can be individually labelled in VHR optical satellite imagery. We also provide guidance on ways to differentiate species of cetaceans in VHR satellite imagery (Supplementary material 3), as well as on assessing the certainty of the detection (Supplementary material 5). Method details Step 1: Image acquisition The first step to detecting or counting whales in VHR satellite imagery is to acquire the image (step 1 of Fig. 1). Images can be delivered in different formats.
Most VHR satellites capture a panchromatic image (one band, greyscale, highest spatial resolution) and a multispectral image (multiple bands, usually four or eight, colored, lower spatial resolution than the panchromatic image), except for the WorldView-1 satellite, which only captures a panchromatic image. The main operators of VHR satellites are Airbus, Maxar Technologies, and Planet. Table 1 shows the sensors in orbit for each of these operators, as well as the planned future missions. Due to its commercial nature, VHR satellite imagery is expensive, with discounts available for education and research; we recommend contacting the individual companies for quotes. VHR satellites do not continuously capture images; they attempt to collect imagery over target locations when tasked to do so. The success of a tasked satellite image acquisition is influenced by the satellite schedule, cloud cover, and competing priorities. Once images have been acquired, they are added to the archive, where they are available for anyone to purchase. Purchasing archival imagery is more affordable than requesting a custom tasking of image collection for a specific time and location. Step 2: Pre-processing Before annotating an image, a few pre-processing steps may be needed depending on the type of product acquired (step 2 of Fig. 1). The type of product varies between satellites and operators, but tends to differ mainly in whether images are projected or pansharpened (Table 2). Other pre-processing, such as top-of-atmosphere correction, may be needed depending on the survey goals. Table 1 List of VHR satellites with the company operating them and the type of images available. The spatial resolution for each satellite refers to the panchromatic spatial resolution, which is higher than that of the multispectral image. Projection Projection is the process of mathematically transforming the coordinate system from a sphere to a flat surface. Several coordinate systems exist, with some better suited to represent data for different geographic locations. When a satellite captures an image of the Earth's surface, the image will show some distortions, as the image is a flat surface and the Earth a sphere. This distortion needs to be corrected by assigning the appropriate coordinate system to the image (Fig. 2). If the imagery acquired is not already projected in WGS 1984 with the relevant UTM zone, projecting the image is required before annotation. Pansharpening Pansharpening is the process by which the pixels of the panchromatic image are combined with the pixels of a multispectral image to produce a new image with the high spatial resolution of the panchromatic image and the additional color information of the multispectral image (Fig. 3). We highly recommend this step for manually annotating VHR satellite images, as it improves the ability to discriminate objects in the image. Using only the panchromatic image is possible, but the color adds confidence in detection. Images that have already been pansharpened can be acquired from the imagery provider. Detailed pansharpening protocols are outlined in Supplementary material 1 for ESRI ArcMap 10.8 and Supplementary material 2 for ESRI ArcGIS Pro 2.5.
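Outside ArcGIS, the same pansharpening step can also be scripted. Below is a minimal sketch of a Brovey-style pansharpen using rasterio and numpy; the file names are placeholders, and operational pipelines would more commonly rely on the provider's pansharpened product or GDAL's pansharpening utilities instead.

```python
# Minimal Brovey-style pansharpening sketch (illustrative only; file names
# are placeholders). Assumes a 1-band panchromatic image and a multi-band
# multispectral image of the same scene.
import numpy as np
import rasterio
from rasterio.enums import Resampling

with rasterio.open("scene_pan.tif") as pan_src, \
     rasterio.open("scene_ms.tif") as ms_src:
    pan = pan_src.read(1).astype("float32")
    # Resample the multispectral bands onto the panchromatic pixel grid.
    ms = ms_src.read(
        out_shape=(ms_src.count, pan_src.height, pan_src.width),
        resampling=Resampling.bilinear,
    ).astype("float32")

    # Brovey transform: scale each band by pan / mean of the MS bands.
    intensity = ms.mean(axis=0)
    sharp = ms * (pan / (intensity + 1e-6))  # epsilon avoids divide-by-zero

    profile = pan_src.profile
    profile.update(count=ms.shape[0], dtype="float32")
    with rasterio.open("scene_pansharpened.tif", "w", **profile) as dst:
        dst.write(sharp)
```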
Atmospheric correction If the aim of the project is to compare the spectral reflectance of whales between different images, then the images need to be corrected for atmospheric effects. Atmospheric correction removes atmospheric effects, such as scattering and absorption by gases and aerosols present in the atmosphere; these effects depend on the composition of the atmosphere and the collection geometry of the data. Two types of atmospheric correction exist to obtain spectral reflectance: top-of-atmosphere and bottom-of-atmosphere [37]. Top-of-atmosphere correction requires parameters based upon the mean solar spectral irradiance, the solar zenith angle, and the spectral radiance at the sensor's aperture. These are available from the imagery metadata, so this correction can almost always be applied to VHR satellite imagery [38][39][40]. The bottom-of-atmosphere correction (sometimes referred to as full atmospheric correction) gives the spectral reflectance of the feature as it would be if measured at the surface of the Earth. It allows true comparison of the spectra of pixels between different satellite images taken at different times under different atmospheric conditions. However, this full atmospheric correction requires knowing the accurate gas and aerosol composition of the atmosphere at a given time. This is difficult to estimate accurately, as it varies among regions, days and times of day, requiring in situ measurements or atmospheric composition models accurate for the studied location [37]; these are rarely available at field sites. Therefore, when comparing whale spectra between images, at minimum the top-of-atmosphere correction should be applied. This can be done in ENVI, similar to Cubaynes et al. (2019) [3], or in other available software. In ArcGIS Pro, the Apparent Reflectance function allows correction to top-of-atmosphere reflectance for the following VHR satellites: IKONOS, QuickBird, GeoEye-1, RapidEye, DMCii, WorldView-1, WorldView-2, SPOT 6, and Pleiades [41]. Step 3: Systematic scanning To ensure that the whole image is reviewed for the presence of cetaceans, systematic scanning is necessary (step 3 of Fig. 1). A grid needs to be overlaid on top of the satellite image so that it can be reviewed in a systematic pattern from the top to the bottom of the image, scanning left to right, then right to left, and so on. We recommend reviewing the image at a scale of 1:1500 for large cetaceans (animals between 9 and 20 m long) and zooming in as needed. For the larger whale species (above 25 m long), such as fin whales (Balaenoptera physalus) and blue whales (Balaenoptera musculus), a scale of 1:2000 is sufficient, and for smaller cetaceans (less than 9 m long) we recommend a scale of 1:1250 [3, 17, 18]. As some images can cover a large area (more than 500 km²), a full review can take days; we therefore recommend keeping track of the grid cells that have been reviewed by following the steps outlined in Supplementary material 1 for ESRI ArcMap 10.8 and Supplementary material 2 for ESRI ArcGIS Pro 2.5. Step 4: Annotating Annotating consists of labeling the imagery by placing points or bounding boxes on the object of interest, in this case whales (step 4 of Fig. 1), and filling in the relevant information needed for the machine learning model, such as the species name (Table 3). In ESRI ArcMap and ESRI ArcGIS Pro, points can be stored in a shapefile, which retains the coordinate information of the points alongside any associated metadata.
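For teams scripting the equivalent point-annotation step outside ArcGIS, a minimal sketch with geopandas follows; the coordinates, attribute values, and output path are illustrative placeholders, and the field names loosely follow the recommendations in Table 3 below.

```python
# Minimal sketch of storing point annotations with attribute metadata in a
# shapefile (all values below are illustrative placeholders).
import geopandas as gpd
from shapely.geometry import Point

records = [
    {
        "observer": "A. Observer",
        "satellite": "WorldView-3",
        "image_id": "CAT-0000000000",          # provider catalog ID
        "img_date": "2020-08-15",
        "species": "Mn",                        # species code (see Suppl. 4)
        "certainty": 2,                          # 1 definite, 2 probable, 3 possible
        "geometry": Point(-60.123456, -62.654321),  # lon, lat of detection
    },
]

gdf = gpd.GeoDataFrame(records, crs="EPSG:4326")  # WGS 1984
gdf.to_file("whale_annotations.shp")              # shapefile keeps attributes
print(gdf.head())
```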
An important aspect of annotating is assessing the confidence in the detection of the target object. We have built a workflow to help assess species identification (Fig. 4; see Supplementary material 3 for more details) and to assign a certainty level (see Supplementary material 5). Detailed instructions to annotate VHR satellite images are outlined in Supplementary material 1 for ESRI ArcMap 10.8 and Supplementary material 2 for ESRI ArcGIS Pro 2.5. Fig. 4. Species decision tree for cetaceans previously observed in VHR satellite imagery. Table 3 List of fields recommended for the attribute table when annotating cetaceans in VHR satellite images, although these may vary with project goals. Observer: name of the person reviewing the image. Location: name of the location where the satellite image was captured. Satellite: name of the satellite that captured the image. Ground sampling distance: the distance between the center points of each pixel, which can be found in the metadata by right-clicking on the panchromatic file and selecting "Properties", then "Source" and "Raster Information". Image id: unique identification that the satellite imagery provider assigns to each image; with Maxar, this corresponds to the catalog ID. Image date: date the image was captured. Image time: time the image was captured. Product type: the level of pre-processing an image has gone through when acquired from the satellite imagery provider, such as projection (see Table 1 for the product types offered by the main VHR satellite imagery providers). Environmental conditions: other environmental conditions that the observer thinks might limit the visibility of whales (e.g. dark image for polar regions from autumn to spring). Latitude: latitude of the whale detection. Longitude: longitude of the whale detection. Geographical coordinate system: geographical coordinate system, which can be found in the metadata. Projection: projection applied to the image to remove distortion. Species code: code for the species or the next higher taxonomic level (see Supplementary material 3 to help decide, and Supplementary material 4 for the code to use). Certainty: certainty of the assignment of the species or the next higher taxonomic level (see Supplementary material 5 to help decide): 1 = Definite, you are confident in your species determination (90-100% confidence); 2 = Probable, you think that your species determination is likely but you are not sure (60-90% confidence); 3 = Possible, you think that your species determination is possible but it is hard to tell (10-60% confidence). Body color: body color of the whale when at the surface (dorsally when viewed in VHR satellite imagery). Body shape: overall shape of the whale's body. Comments: any other comment the observer would like to make about the specific detection. Step 5: Creating bounding boxes Although point shapefiles of annotated cetaceans may be useful for automating detection, particularly for approaches utilizing spectral signatures, bounding boxes are often desired for training machine learning models [15, 26]. Similar to Cubaynes and Fretwell (2022) [31], these boxes can be created from the point shapefile, incorporating the metadata from the attribute table so that each bounding box has a set of specific information attached to it, necessary for automation (step 5 of Fig. 1). We recommend making the bounding box at least twice the known adult size of the species of interest.
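A minimal geopandas/rasterio sketch of this step and of the chip clipping described in Step 6 below might look as follows; the file names and the assumed adult body length are placeholders, and the 2x-adult-length box side follows the recommendation above.

```python
# Minimal sketch of Steps 5-6: square bounding boxes from annotation points,
# then clipping the satellite image into chips. File names and the assumed
# adult body length are placeholders.
import geopandas as gpd
import rasterio
from rasterio.mask import mask

ADULT_LENGTH_M = 15.0          # assumed adult length of the target species
HALF_BOX = ADULT_LENGTH_M      # buffer radius -> box side = 2 x adult length

points = gpd.read_file("whale_annotations.shp").to_crs("EPSG:32721")  # metric UTM
# Square buffer -> envelope gives an axis-aligned bounding box per point;
# attribute columns are carried over from the point shapefile.
boxes = points.copy()
boxes["geometry"] = points.buffer(HALF_BOX).envelope
boxes.to_file("whale_boxes.shp")

with rasterio.open("scene_pansharpened.tif") as src:
    for i, geom in enumerate(boxes.to_crs(src.crs).geometry):
        chip, transform = mask(src, [geom], crop=True)   # clip to the box
        profile = src.profile
        profile.update(height=chip.shape[1], width=chip.shape[2],
                       transform=transform)
        # Chips are written as GeoTIFFs here; conversion to png/jpeg for
        # sharing (per license terms) would be a separate export step.
        with rasterio.open(f"chip_{i:04d}.tif", "w", **profile) as dst:
            dst.write(chip)
```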
Step 6: Creating image chips Image chips can be created by using the bounding boxes to clip the satellite image into several image chips that contain cetaceans (see details in Supplementary material 1 for ESRI ArcMap 10.8 and Supplementary material 2 for ESRI ArcGIS Pro 2.5; step 6 of Fig. 1). VHR satellite images have limited distribution due to licensing restrictions. Some licenses, such as the group license with Maxar Technologies, permit the sharing of subsets of the images in a png or jpeg format (with reduced spectral resolution and lacking spatial reference) [31]. Therefore, it is important to verify with the satellite imagery provider what can be shared (e.g. format, subset or whole image) and with whom (under certain licenses, sharing the raw images with collaborators is feasible). Methods validation The workflow for ESRI ArcMap was developed and used by several studies [3, 17, 19, 31] and has been updated here for ArcMap 10.8; none of these studies offered a step-by-step guide. The workflow for ESRI ArcGIS Pro 2.5 was adapted from the ArcMap workflow. Ethics statements This method does not involve work with human subjects, animal experiments, or data collected from social media platforms. Funding This work was supported by the Marine Mammal Commission (project MMC21-043). This study represents a contribution to the Ecosystems component of the British Antarctic Survey, funded by the Natural Environment Research Council (NERC). This work also represents a contribution to the Geospatial Artificial Intelligence for Animals (GAIA) project. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Data Availability No data was used for the research described in the article.
2023-01-27T16:16:51.135Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "1a1a171437c3ee61aeb4f202bf77723d4b71c15f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.mex.2023.102040", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "57edd40020926580b8291b853d092b8087f047ac", "s2fieldsofstudy": [ "Environmental Science", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
128358915
pes2o/s2orc
v3-fos-license
Spin injection and pumping generated by a direct current flowing through a magnetic tunnel junction A charge flow through a magnetic tunnel junction (MTJ) leads to the generation of a spin-polarized current which exerts a spin-transfer torque (STT) on the magnetization. When the density of the applied direct current exceeds some critical value, the STT excites high-frequency magnetization precession in the "free" electrode of the MTJ. Such precession gives rise to a microwave output voltage and, furthermore, can be employed for spin pumping into an adjacent normal metal or semiconductor. Here we describe theoretically the spin dynamics and charge transport in the CoFeB/MgO/CoFeB/Au tunneling heterostructure connected to a constant-current source. The magnetization dynamics in the free CoFeB layer with weak perpendicular anisotropy is calculated by numerical integration of the Landau-Lifshitz-Gilbert-Slonczewski equation, accounting for both the STT and the voltage-controlled magnetic anisotropy associated with the CoFeB|MgO interface. It is shown that a large-angle magnetization precession, resulting from an electrically induced dynamic spin reorientation transition, can be generated in a certain range of relatively low current densities. An oscillating spin current, which is pumped into the Au overlayer owing to such precession, is then evaluated together with the injected spin current. Considering both the driving spin-polarized charge current and the pumped spin current, we also describe the charge transport in the CoFeB/Au bilayer, accounting for the anomalous and inverse spin Hall effects. An electric potential difference between the lateral sides of the CoFeB/Au bilayer is calculated as a function of distance from the CoFeB|MgO interface. It is found that this transverse voltage signal in Au is large enough for experimental detection, which indicates a significant efficiency of the proposed current-driven spin injector. I. INTRODUCTION In conductive ferromagnetic nanolayers, magnetic dynamics can be induced by a spin-polarized charge current exerting a spin-transfer torque (STT) on the magnetization [1,2]. The STT results from the transfer of angular momentum and provides the opportunity to excite high-frequency magnetization oscillations in nanomagnets by applied direct or alternating (microwave) currents [3]-[11]. Furthermore, spin-polarized charge currents with sufficiently high densities lead to magnetization switching in metallic pillars [12,13] and magnetic tunnel junctions (MTJs) [14]-[16]. Such current-induced switching serves as a mechanism for data writing in magnetic random access memories utilizing the STT effect (STT-MRAMs) [17]-[19], while the magnetization precession driven by direct currents in spin-torque nanoscale oscillators (STNOs) creates microwave voltages, which makes STNOs potentially useful as frequency-tunable microwave sources and detectors [8]-[11]. In ferromagnetic nanostructures comprising insulating interlayers, the electric field created in the insulator adjacent to the metallic ferromagnet may significantly affect the magnetic anisotropy of the latter. Such voltage-controlled magnetic anisotropy (VCMA) results from the penetration of the electric field into an atomically thin surface layer of the ferromagnetic metal, which modifies the interfacial magnetic anisotropy [20]-[27]. The presence of VCMA makes it possible to induce magnetization precession in ferromagnetic nanostructures by microwave voltages [28]-[31].
It is also shown that the application of a dc voltage to a ferromagnetic nanolayer possessing VCMA can lead to a spin reorientation transition (SRT) [32]-[35]. Moreover, precessional 180° magnetization switching using electric-field pulses has been demonstrated experimentally [36,37]. In addition, the voltage dependence of the interfacial magnetic anisotropy in CoFeB/MgO/CoFeB tunnel junctions may greatly reduce the critical current density required for the STT-driven magnetization reversal [25,34]. Importantly, magnetization precession in a ferromagnetic layer gives rise to spin pumping into an adjoining normal metal or semiconductor [38]-[42]. In this paper, we theoretically study the magnetization dynamics driven by a direct current applied to the Co₂₀Fe₆₀B₂₀/MgO/Co₂₀Fe₆₀B₂₀ tunnel junction and calculate the time-dependent spin current generated in the Au overlayer. The magnetization evolution in the free Co₂₀Fe₆₀B₂₀ layer is determined by solving numerically the Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation, which accounts for the STT created by a spin-polarized tunnel current and for the VCMA associated with the Co₂₀Fe₆₀B₂₀|MgO interface. A range of current densities is revealed within which a steady-state magnetization precession is generated in the free Co₂₀Fe₆₀B₂₀ layer. For this "precession window", frequencies and trajectories of magnetization oscillations are determined and used to calculate the time-dependent spin current created in the Au overlayer. Our calculations are distinguished by accounting for both the spin polarization of the charge current and the precession-driven spin pumping, as well as for the contribution of the latter to the damping of the magnetization dynamics. Finally, we solve coupled drift-diffusion equations for charge and spin currents to determine the spatial distribution of the electric potential in the Co₂₀Fe₆₀B₂₀/Au bilayer. II. CURRENT-DRIVEN MAGNETIZATION DYNAMICS We consider an MTJ comprising an ultrathin free layer with thickness t_f smaller than the critical thickness t_SRT, below which it acquires a perpendicular magnetic anisotropy [26,43]. The thickness t_p of the pinned layer is taken to be larger than t_SRT so that the pinned magnetization M_p has an in-plane orientation (Fig. 1). Both layers are assumed to be homogeneously magnetized, and the current flowing through the tunnel barrier is regarded as uniform. To describe the dynamics of the free-layer magnetization M(t), we employ the macrospin approximation, which is well suited for magnetic layers with nanoscale in-plane dimensions. Since the magnetization magnitude |M| = M_s at a fixed temperature much lower than the Curie temperature can be considered a constant quantity, the LLGS equation may be reformulated for the unit vector m = M/M_s [44] and written as dm/dt = −γµ₀ m × H_eff + α m × dm/dt + τ_STT m × (m × m_p), (1) where γ > 0 is the electron's gyromagnetic ratio, µ₀ is the permeability of vacuum, α is the Gilbert damping parameter, and H_eff is the effective field acting on the magnetization. In Eq. (1), the last term takes into account the STT proportional to the current density J in the free layer, whereas the field-like torque is disregarded because it does not affect the magnetic dynamics qualitatively [11,29]. For symmetric MTJs, the theory gives τ_STT = (γℏ/2e) ηJ / [M_s t_f (1 + η² m · m_p)], where e is the elementary (positive) charge, ℏ is the reduced Planck constant, η = (G_P − G_AP)/(G_P + G_AP), and G_P and G_AP are the MTJ conductances per unit area in the states with parallel and antiparallel electrode magnetizations, respectively [2].
Since we consider the MTJ connected to a constant-current source, the voltage drop V = J/G across the tunnel barrier depends on the junction's conductance G = G_P(1 + η² m · m_p)/(1 + η²), which leads to a non-sinusoidal dependence of the STT on the angle between m and m_p. The effective field involved in Eq. (1) is defined by the relation H_eff = −(µ₀M_s)⁻¹ ∂F/∂m, where F(m) is the Helmholtz free energy density of the ferromagnetic layer. For a homogeneously magnetized, unstrained free layer made of a cubic ferromagnet, the magnetization-dependent part ∆F(m) of the effective volumetric energy density may be approximated by the polynomial ∆F = K₁(m₁²m₂² + m₁²m₃² + m₂²m₃²) + K₂ m₁²m₂²m₃² − (K_s/t_f) m₃² − (U_IEC/t_f) m · m_p + (µ₀M_s²/2) N_ij m_i m_j − µ₀M_s m · H, (2) where m_i (i = 1, 2, 3) are the direction cosines of M in the crystallographic reference frame with the x₃ axis orthogonal to the layer surfaces, K₁ and K₂ are the coefficients of the fourth- and sixth-order terms defining the cubic magnetocrystalline anisotropy, K_s is the parameter characterizing the total specific energy of the two interfaces (Co₂₀Fe₆₀B₂₀|MgO and Co₂₀Fe₆₀B₂₀|Au in our case), U_IEC is the energy of interlayer exchange coupling (IEC) with the pinned layer (per unit area), N_ij are the demagnetizing factors (N₁₃ and N₂₃ are negligible at in-plane dimensions L₁, L₂ >> t_f), and H is the average magnetic field acting on the free layer. Since the magnetic anisotropy associated with the Co₂₀Fe₆₀B₂₀|MgO interface depends on the electric field E₃ in MgO [26,27], the coefficient K_s is a function of the current density J. Using a linear approximation for the dependence K_s(E₃), supported by first-principles calculations [24] and experimental data [27], we arrive at the relation K_s(E₃) = K_s(0) + k_s E₃ with E₃ = V/t_b, where k_s = ∂K_s/∂E₃ is the electric-field sensitivity of K_s, and V is the voltage drop across the MgO layer of thickness t_b, which is caused by the tunnel current flowing through the junction with the conductance G per unit area. The numerical integration of Eq. (1) was realized with the aid of the projective Euler scheme, in which the condition |m| = 1 is satisfied automatically because m is renormalized after each step. A fixed integration step δt = 0.5 fs was used in our computations. The effective field H_eff was determined from Eq. (2) under the assumption of a negligible total magnetic field H acting on the free layer, which is justified by the absence of external magnetic sources and the zero mean value of the current-induced Oersted field. Since in the considered heterostructure the magnetization dynamics in the free layer leads to spin pumping into the adjacent nonmagnetic layer, the parameters γ and α involved in Eq. (1) were renormalized to account for this spin pumping following [38]; in this renormalization, γ₀ and α₀ denote the bulk values of γ and α, g_L is the Landé factor, µ_B is the Bohr magneton, and g_r↑↓ is the complex reflection spin-mixing conductance per unit area of the ferromagnet/normal-metal contact [45]. The Gilbert parameter α₀ was regarded as a constant quantity, because numerical estimates show that the dependence of α₀ on the power of the magnetization precession [46] is negligible in our case. The numerical calculations were performed for the Co₂₀Fe₆₀B₂₀/MgO/Co₂₀Fe₆₀B₂₀ junction with the barrier and electrode thicknesses equal to t_b = 1.1 nm, t_f = 1.73 nm, and t_p = 5 nm. A rectangular in-plane shape and nanoscale dimensions L₁ = 400 nm and L₂ = 40 nm were chosen for the free layer. The demagnetizing factors of such a ferromagnetic layer, calculated from the available analytic formulae [47], were found to be N₁₁ = 0.0059, N₂₂ = 0.0626, N₁₂ = 0, and N₃₃ = 0.9315.
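As a rough illustration of the projective Euler scheme described above, a minimal Python sketch follows. The effective field is deliberately reduced to a single uniaxial-anisotropy term for brevity (the full free-energy expression of Eq. (2) would be used in practice), and the STT prefactor is an illustrative placeholder; only M_s, α₀ and δt follow the values quoted in the text.

```python
# Minimal sketch of projective-Euler integration of the LLGS equation with a
# deliberately simplified effective field (uniaxial anisotropy only).
import numpy as np

GAMMA = 1.76e11        # gyromagnetic ratio, rad s^-1 T^-1
MU0 = 4e-7 * np.pi     # vacuum permeability, T m A^-1
MS = 1.13e6            # saturation magnetization, A/m (value from the text)
ALPHA = 0.01           # Gilbert damping (value from the text)
DT = 0.5e-15           # time step, s (value from the text)
TAU_STT = 5e8          # STT prefactor, s^-1 (illustrative placeholder)
M_P = np.array([0.0, 1.0, 0.0])   # pinned-layer magnetization direction

def h_eff(m):
    # Simplified effective field: perpendicular easy axis along x3.
    h_k = 0.05 * MS                  # illustrative anisotropy field, A/m
    return np.array([0.0, 0.0, h_k * m[2]])

def step(m):
    # t collects all torques except the implicit Gilbert term, which is then
    # resolved exactly: dm/dt = (t + alpha m x t) / (1 + alpha^2).
    t = -GAMMA * MU0 * np.cross(m, h_eff(m)) \
        + TAU_STT * np.cross(m, np.cross(m, M_P))
    dmdt = (t + ALPHA * np.cross(m, t)) / (1.0 + ALPHA**2)
    m_new = m + dmdt * DT
    return m_new / np.linalg.norm(m_new)   # projective step keeps |m| = 1

m = np.array([0.01, 0.01, 1.0])
m /= np.linalg.norm(m)
for _ in range(10_000):                    # 10,000 x 0.5 fs = 5 ps
    m = step(m)
print("m after 5 ps:", m)
```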
A high in-plane aspect ratio L₁/L₂ = 10 was given to the free layer in order to make energetically more favorable the magnetization orientations in the plane perpendicular to the pinned magnetization M_p, which enhances the STT acting on M. The pinned layer was assumed to have a large area, ensuring a negligible contribution of the magnetostatic interlayer interaction to the free-layer energy ∆F in comparison with that of the IEC, defined by the relation U_IEC ≈ 5.78 exp(−7.43 × 10⁹ t_b m⁻¹) mJ m⁻² [48]. The saturation magnetization M_s = 1.13 × 10⁶ A m⁻¹ [49] and the Gilbert damping constant α₀ = 0.01 [43] were assigned to the Co₂₀Fe₆₀B₂₀ free layer, while its magnetocrystalline anisotropy was described using the coefficients K₁ = 5 × 10³ J m⁻³ [50] and K₂ = 50 J m⁻³ [29]. To quantify the VCMA associated with the Co₂₀Fe₆₀B₂₀|MgO interface, we used measured parameters [43] and k_s = 37 fJ V⁻¹ m⁻¹ [29]. The junction's conductance G_P at the chosen MgO thickness was taken to be 8.125 × 10⁹ S m⁻² [51], and we used a typical asymmetry parameter η = 0.57 [16,27], which defines the tunneling magnetoresistance ratio. The numerical calculations started with the determination of the equilibrium magnetization orientation in the free Co₂₀Fe₆₀B₂₀ layer at zero applied current. It was found that the initial energy landscape ∆F(φ₀, θ₀) has only two minima, which correspond to almost perpendicular-to-plane directions of the free-layer magnetization M. Owing to the IEC with the in-plane magnetized pinned Co₂₀Fe₆₀B₂₀ layer, the magnetization M slightly deviates from the perpendicular-to-plane orientation, tilting towards the pinned magnetization M_p oriented along the x₂ axis (φ₀ = 90°, θ₀ = 0.45° or 179.55°; see Fig. 1). The energy barrier for coherent magnetization switching at room temperature T_r is about 60 k_B T_r, where k_B is the Boltzmann constant. Importantly, the perpendicular magnetic anisotropy is sufficient to prevent the coexistence of metastable states with an in-plane orientation of M, which otherwise could temporarily show up due to thermal fluctuations. The application of a small current to the MTJ modifies the equilibrium magnetization orientation, because the interfacial magnetic anisotropy changes due to the voltage drop V = J/G across the barrier and a nonzero τ_STT(J) appears in Eq. (1). The simulations showed that at J < 0 the magnetization M progressively rotates towards the perpendicular-to-plane (PP) direction with increasing current, remaining stable up to very high densities |J| < 10¹⁰ A m⁻². On the contrary, the deviation of M from the PP direction increases when a positive current is applied to the MTJ (J > 0), reaching θ = 7.54° just below the critical density J_min ≈ 3.9 × 10⁹ A m⁻², at which the magnetization dynamics arises. Remarkably, the predicted value of J_min falls into the range of the lowest critical current densities |J_min(t_f)| = (1.2−5.4) × 10⁹ A m⁻² measured experimentally to date [10]. Therefore, we focus below on the magnetic dynamics induced by positive applied currents, which correspond to the tunneling of electrons from the free layer into the pinned one. Above J_min, the STT excites a steady-state magnetization precession around the in-plane (IP) direction antiparallel to the pinned magnetization M_p. The appearance of such an electrically driven SRT can be attributed to the proximity of the free-layer thickness t_f = 1.73 nm to the critical thickness t_SRT = 1.745 nm, at which the size-induced SRT should take place in the considered MTJ at J = 0.
Indeed, the change ∆K_s = k_s J_min/(G t_b) in the VCMA promotes a voltage-driven SRT to the IP magnetization orientation parallel to the x₁ axis, while the STT gives rise to the precession of m. The proximity to the thickness-induced SRT also explains the very large precession amplitude at J_min. With increasing current density J > J_min, the frequency of the steady-state magnetization precession rises, whereas its amplitude becomes smaller [Fig. 2(b)]. The precession frequency ν ranges from 0.95 GHz at J_min to 1.54 GHz at the maximal density J_max = 5.4 × 10⁹ A m⁻², above which the precession disappears [52]. Owing to the strong STT, the free-layer magnetization stabilizes at J > J_max along the direction antiparallel to the magnetization of the pinned Co₂₀Fe₆₀B₂₀ layer. III. SPIN AND CHARGE CURRENTS IN NORMAL-METAL OVERLAYER The electrically induced magnetic dynamics in the free Co₂₀Fe₆₀B₂₀ layer should lead to spin pumping into the Au overlayer. The spin-current density can be specified by a tensor J_s characterizing both the direction of spin flow, defined by the unit vector e_s, and the orientation of spin polarization [53]. Since the Co₂₀Fe₆₀B₂₀ thickness is well above the few-monolayer range, the imaginary part of the reflection spin-mixing conductance g_r↑↓ and the transmission spin-mixing conductance g_t↑↓ are negligible. Therefore, the pumped spin-current density J^sp in the vicinity of the Co₂₀Fe₆₀B₂₀|Au interface can be calculated from the approximate relation e_s · J^sp ≈ (ℏ/4π) Re[g_r↑↓] m × dm/dt [45]. Adopting for the Co₂₀Fe₆₀B₂₀|Au interface the theoretical estimate (e²/h) Re[g_r↑↓] ≈ 4.66 × 10¹⁴ Ω⁻¹ m⁻² obtained for the Fe|Au one [45], we calculated the spin current pumped into Au during the magnetization precession in the free layer. Figure 3 shows representative time dependences of the nonzero spin-current densities J^sp_3k(t) (k = 1, 2, 3), which correspond to the magnetization dynamics m(t) appearing at the critical charge-current density J_min. Interestingly, J^sp_32 contains significant dc and ac components, whereas J^sp_31 and J^sp_33 are dominated by the ac component. In the steady-state regime, J^sp_32 oscillates at the frequency 2ν, two times the precession frequency ν, owing to similar oscillations of the direction cosine m₂. Taking into account the spin polarization of the charge current governed by the free-layer magnetization M(t), we calculated the total spin-current density J_s = J^sp + J^sc at the Co₂₀Fe₆₀B₂₀|Au interface. Thus, the Co₂₀Fe₆₀B₂₀/MgO/Co₂₀Fe₆₀B₂₀ tunnel junction excited by a direct charge current can be employed for the generation of spin currents in normal metals [54]. The power dissipation W_min ≈ J²_min L₁ L₂ G⁻¹ of such an electrically driven spin injector is estimated to be below 40 µW, which is a very small value for devices of sub-micrometer size [10]. To evaluate the efficiency of the proposed spin injector, we calculated the electric potential difference ∆V(x₃, t) = φ(x₁ = L₁/2, x₃, t) − φ(x₁ = −L₁/2, x₃, t) between the lateral sides of the Co₂₀Fe₆₀B₂₀/Au bilayer. Owing to the inverse spin Hall effect, the spin flow in the normal metal creates such a transverse voltage signal, which can be used to detect this flow experimentally [40].
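A minimal numerical sketch of the pumped-spin-current estimate used here, evaluating (ℏ/4π) Re[g_r↑↓] m × dm/dt from a stored trajectory m(t), is given below. The circular small-angle trajectory is a synthetic placeholder standing in for the simulated LLGS solution; the mixing conductance is converted from the (e²/h) Re[g_r↑↓] value quoted in the text.

```python
# Minimal sketch: pumped spin current (hbar/4pi) Re[g] m x dm/dt from a
# magnetization trajectory m(t). The trajectory below is a synthetic
# placeholder, not the simulated LLGS solution.
import numpy as np

HBAR = 1.054571817e-34                 # J s
E = 1.602176634e-19                    # C
H_PLANCK = 6.62607015e-34              # J s
G_MIX = 4.66e14 * H_PLANCK / E**2      # Re[g_r] in m^-2, from (e^2/h)Re[g_r]

# Synthetic placeholder: small-angle precession about x1 at 1 GHz.
f = 1.0e9
t = np.arange(0.0, 5e-9, 1e-12)        # 5 ns sampled every 1 ps
theta = 0.3                             # cone angle, rad (illustrative)
m = np.stack([np.full_like(t, np.cos(theta)),
              np.sin(theta) * np.cos(2 * np.pi * f * t),
              np.sin(theta) * np.sin(2 * np.pi * f * t)], axis=1)

dmdt = np.gradient(m, t, axis=0)        # finite-difference time derivative
j_sp = (HBAR / (4 * np.pi)) * G_MIX * np.cross(m, dmdt)  # J m^-2

print("dc component of pumped spin current:", j_sp.mean(axis=0))
```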
To determine the distribution of the electric potential φ(r, t) in the Co₂₀Fe₆₀B₂₀/Au bilayer, we solved the coupled drift-diffusion equations [53,55,56] for the charge and spin currents flowing in the Co₂₀Fe₆₀B₂₀ and Au films. The continuity equations for the charge-current density J and the spin-current density J_s have the form ∇·J = −∂ρ/∂t and ℏ⁻¹∇·J_s = −∂P/∂t − P/τ_sf, where ρ is the charge density, P is the spin polarization density, and τ_sf is the spin-flip relaxation time. Since spatial variations in the electron concentration n can be neglected for metals [56], the explicit expressions for the densities J and J_s [Eqs. (4) and (5)] reduce to functions of the electric field E, the electron mobility µ, the diffusion coefficient D, the spin Hall angle α_SH, and the components ε_ikl of the Levi-Civita tensor (i, k, l = 1, 2, 3), with the Einstein summation convention implied. In Eq. (4), the second term describes the anomalous Hall effect characteristic of ferromagnetic metals, while the third term represents the inverse spin Hall effect. The first term in Eq. (5) gives the contribution of the spin-polarized charge current; the last term accounts for the spin Hall effect, which manifests itself in the current-induced spin accumulation near sample boundaries. The continuity equations were supplemented by appropriate boundary conditions, which should be fulfilled at the Co₂₀Fe₆₀B₂₀|Au interface and at the outer boundaries of the Co₂₀Fe₆₀B₂₀/Au bilayer connected to a constant-current source via a gold nanoplate (Fig. 1). At the MgO|Co₂₀Fe₆₀B₂₀ interface, the projection J₃ of the charge-current density J on the x₃ axis of our reference frame, orthogonal to the interface, was set equal to the density J₀ of the tunnel current. In addition, the vector J was taken to be parallel to the x₃ axis near the lateral faces of the Co₂₀Fe₆₀B₂₀/Au bilayer and at the contact with the Au nanoplate, where J satisfies the equality J = (L₁/d)J₀ involving the nanoplate thickness d = 5 nm along the x₁ axis. At the Co₂₀Fe₆₀B₂₀|Au interface, we specified the spin-current density J_s via the boundary condition e_n · J_s = e_n · J^sp − (ℏJ₀/2e) p_f, where e_n is the unit normal vector of the interface and p_f = (N↑ − N↓)/(N↑ + N↓) m is the spin polarization of the ferromagnetic layer, defined by the densities of states of spin-up (N↑) and spin-down (N↓) electrons at the Fermi level [57]. Of course, the spin-current direction e_s was taken to be parallel to the lateral faces of the Co₂₀Fe₆₀B₂₀/Au bilayer in the vicinity of these faces. The sought functions φ(r, t) and P(r, t) were calculated numerically by solving the system of differential continuity equations with the aid of a finite-element method. The calculations were performed in the quasi-static approximation (∂ρ/∂t = ∂P/∂t = 0), which is justified by the fact that the period 1/ν ∼ 1 ns of the current-induced magnetization precession is much longer than the characteristic times of charge (∼0.1 ps [58]) and spin (τ_sf < 100 ps [59]) equilibration. Since the size L₂ of the Co₂₀Fe₆₀B₂₀/Au bilayer along the x₂ axis is taken to be much smaller than the size L₁ along the x₁ axis (L₂/L₁ = 0.1), variations of the potential φ and the spin polarization density P along the coordinate x₂ can be ignored. Therefore, we restricted our numerical calculations to the solution of a two-dimensional problem, enabling us to determine the functions φ(x₁, x₃, t) and P(x₁, x₃, t).
In addition, only the component J^sp_32 of the pumped spin current was taken into account in the calculations, because it was found that the components J^sp_31 and J^sp_33 have a negligible effect on the sought output voltage ∆V(x₃, t) of the device. The thickness of the Au overlayer along the x₃ axis was chosen to be much larger than the Au spin-diffusion length λ_sd = √(Dτ_sf) = 35 nm [41] and set equal to 400 nm. In the numerical calculations, the conductivity σ = eµn of Co₂₀Fe₆₀B₂₀ was taken to be 4.45 × 10⁵ S m⁻¹ [60], which yields the electron mobility µ = n⁻¹ × 2.8 × 10²⁶ m⁻¹ V⁻¹ s⁻¹. The anomalous Hall angle α_AH = 2α_SH and the spin polarization p_f of Co₂₀Fe₆₀B₂₀ were assumed to be 0.02 [61] and 0.53 [57], respectively. For Au, the conductivity equals 4.5 × 10⁷ S m⁻¹ [62], which gives µ = 4.81 × 10⁻³ m² V⁻¹ s⁻¹ and D = 1.25 × 10⁻⁴ m² s⁻¹. The spin-flip relaxation time τ_sf and the spin Hall angle of Au were taken to be 9.84 ps and 0.0035 [41]. It should be noted that the spin polarization density in the free Co₂₀Fe₆₀B₂₀ layer was assumed uniform to ensure consistency with the macrospin approximation used to describe the magnetization dynamics. Using the obtained functions φ(x₁, x₃, t) and P(x₁, x₃, t), we calculated spatial distributions of the charge-current density J(x₁, x₃, t) in the Co₂₀Fe₆₀B₂₀/Au bilayer and the electric potential difference ∆V(x₃, t) between its lateral sides. Interestingly, the charge-current distribution at any fixed moment t can be represented as the sum of the applied uniform current J₀ and a vortex-like contribution δJ(x₁, x₃, t), illustrated by Fig. 6. The transverse voltage signal ∆V(x₃, t) generated by the device decreases with increasing distance x₃ from the MgO|Co₂₀Fe₆₀B₂₀ interface, falling rapidly within the Co₂₀Fe₆₀B₂₀ layer [Fig. 7(a)]. Remarkably, the analysis of the numerical results obtained for the transverse voltage reveals that ∆V(x₃, t) can be fitted with high accuracy by an analytical formula [Eq. (6)], in which the first term represents the contribution ∆V_AHE of the anomalous Hall effect, while the second term describes the contribution ∆V_ISHE resulting from the inverse spin Hall effect. Since the coefficients A(x₃) and B(x₃) involved in Eq. (6) have very different dependences on the distance x₃ (see Fig. 8), the ratio ∆V_ISHE/∆V_AHE changes strongly across the Co₂₀Fe₆₀B₂₀|Au interface. Figure 9 demonstrates that this ratio is mostly very small inside the Co₂₀Fe₆₀B₂₀ layer, but rises steeply near the Co₂₀Fe₆₀B₂₀|Au interface and exceeds 5 in the Au layer. Hence, measurements of the average voltage signal created by the Co₂₀Fe₆₀B₂₀ layer provide information on the anomalous Hall effect, whereas the potential difference ∆V(x₃, t) between the faces of the Au layer measured at x₃ > 25 nm characterizes the inverse spin Hall effect. Figure 10 shows how the dc and ac components of the transverse signal ∆V, averaged over the thickness t_f of the Co₂₀Fe₆₀B₂₀ layer, vary with the charge-current density J. It can be seen that the curves are similar to the dependences J^sc_32(J) presented in Fig. 5, which describe the spin injection into Au caused by the spin-polarized charge current. In contrast, Fig. 7(b) demonstrates the dc component ∆V(J) and the amplitude δV_amp(J) of the GHz-frequency ac component calculated at x₃ = 40 nm.
Importantly, both the dc and microwave signals appear to be large enough for experimental detection within the precession window. Moreover, the dependences ∆V(J) and δV_amp(J) repeat the graphs shown in Fig. 5 for the total spin-current density J^s_32 generated at the Au|Co₂₀Fe₆₀B₂₀ interface, differing only by a constant factor of 20.61 nV µJ⁻¹ m². Hence, measurements of ∆V by nanocontacts placed at distances δx₃ ∼ λ_sd from the boundary of the ferromagnetic layer provide information on the spin injection into the normal metal. IV. SUMMARY In summary, we presented a comprehensive theoretical study of the spin dynamics and charge transport in the Co₂₀Fe₆₀B₂₀/MgO/Co₂₀Fe₆₀B₂₀/Au tunneling heterostructure connected to a constant-current source. The performed numerical calculations enabled us to find the range of current densities within which electrically driven magnetization precession appears in the free Co₂₀Fe₆₀B₂₀ layer, and to determine the precession frequencies and trajectories. Remarkably, a novel, dynamic SRT has been predicted, which is caused by the joint impact of the STT and VCMA and has the form of a magnetization reorientation between an initial static direction and a final dynamic precessional state. The results obtained for the magnetization dynamics were then used to evaluate the dc and ac components of the spin current generated in the Au overlayer owing to the precession-driven spin pumping and the oscillating spin polarization of the charge current. Taking into account the inverse spin Hall and anomalous Hall effects, we finally calculated the charge flow and electric-potential distribution in the Co₂₀Fe₆₀B₂₀/Au bilayer via the numerical solution of coupled drift-diffusion equations for charge and spin currents. It is shown that the potential difference between the lateral faces of the Au layer is large enough for experimental detection, which demonstrates the significant efficiency of the described spin injector.
2019-04-23T14:50:46.000Z
2019-04-23T00:00:00.000
{ "year": 2019, "sha1": "2fddcec8362d2fe03b43e7e742e3f38d9d202ebe", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1904.10361", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2fddcec8362d2fe03b43e7e742e3f38d9d202ebe", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
4061154
pes2o/s2orc
v3-fos-license
Tuberculosis Screening by Tuberculosis Skin Test or QuantiFERON®-TB Gold In-Tube Assay among an Immigrant Population with a High Prevalence of Tuberculosis and BCG Vaccination Rationale Each year 1 million persons acquire permanent U.S. residency visas after tuberculosis (TB) screening. Most applicants undergo a 2-stage screening with tuberculin skin test (TST) followed by CXR only if TST-positive at > 5 mm. Due to cross-reaction with bacillus Calmette-Guérin (BCG), TST may yield false-positive results in BCG-vaccinated persons. Interferon gamma release assays exclude antigens found in BCG. In Vietnam, like most high TB-prevalence countries, there is universal BCG vaccination at birth. Objectives 1. Compare the sensitivity of QuantiFERON®-TB Gold In-Tube Assay (QFT) and TST for culture-positive pulmonary TB. 2. Compare the age-specific and overall prevalence of positive TST and QFT among applicants with normal and abnormal CXR. Methods We obtained TST and QFT results on 996 applicants with abnormal CXR, of whom 132 had TB, and 479 with normal CXR. Results The sensitivity for tuberculosis was 86.4% for QFT, and 89.4%, 81.1%, and 52.3% for TST at 5, 10, and 15 mm. The estimated prevalence of positive results at age 15-19 years was 22% and 42% for QFT and TST at 10 mm, respectively. The prevalence increased thereafter by 0.7% per year of age for TST and 2.1% for QFT, the latter being more consistent with the increase in TB among applicants. Conclusions During 2-stage screening, QFT is as sensitive as TST in detecting TB, with fewer persons requiring CXR or being diagnosed with LTBI. These data support the use of QFT over TST in this population. Introduction Foreign-born persons accounted for 60.5% of the 11,181 tuberculosis cases reported in the United States in 2010 [1]. In 2010, the United States granted one million visas to permanent U.S. residents (immigrants) and three million visas to temporary workers and students (nonimmigrants) [2,3]. Of those, 45% of immigrants and 19% of nonimmigrants were from tuberculosis high-prevalence countries [4]. Tuberculosis elimination in the U.S. will require detection and treatment of both tuberculosis disease and latent TB infection (LTBI) before or after arriving in the United States [5][6][7]. CDC mandates screening of immigrant applicants with the primary goal of detecting and treating those with infectious tuberculosis and a secondary goal of preventing future tuberculosis cases through treatment of latent tuberculosis infection [8]. For visas obtained from outside the United States, the procedure requires all applicants >14 years of age to undergo universal chest radiography, and those with findings consistent with tuberculosis must complete treatment if tuberculosis is confirmed by at least one of three required sputum specimens examined by smear for acid-fast bacilli (AFB) or by mycobacterial culture [9]. For visa applicants residing in the United States on a temporary visa, tuberculosis screening is conducted through a different two-stage process beginning with a test for tuberculosis infection, followed by chest radiography only among applicants with Mantoux tuberculin skin test (TST) indurations > 5 mm in diameter or a positive interferon-gamma release assay (IGRA) [10]. For students, temporary workers, and other nonimmigrants, no tuberculosis screening is required for entry to the United States, but a two-stage process for tuberculosis screening is recommended [11][12][13].
Although most immigrants undergo screening for tuberculosis by this two-stage process, the outcome and effectiveness of this approach have been evaluated in only one small study, and never compared to universal chest radiography [14]. Due, in part, to cross-reaction of TST with the bacillus Calmette-Guérin (BCG) vaccine, interpreting TST results and managing the risk of reactivation of LTBI in patients from high-prevalence countries are problematic [15][16][17]. In 2000, CDC recommended disregarding prior BCG immunization when interpreting a positive TST [12], but following the approval by the U.S. Food and Drug Administration of two interferon gamma release assays (IGRA), the recommendations were revised in 2010 to state that IGRAs are the preferred test for LTBI in populations likely to have received BCG vaccine [13]. The prevalence of the diagnosis of latent tuberculosis infection is much lower using IGRA compared to TST in many BCG-vaccinated populations that are at highest risk for TB, raising questions about the possibility of lower sensitivity of IGRA [13,18,19]. We compared the performance of the TST to an IGRA, the QuantiFERON-TB Gold In-Tube test (QFT), among adult U.S. visa applicants undergoing radiographic screening in Vietnam, a country with universal infant BCG vaccination [20] and a tuberculosis prevalence of 334 per 100,000 population [21]. The goals of the study were to measure the sensitivity of TST and QFT in detecting culture-confirmed pulmonary tuberculosis, and to estimate the overall and age-specific prevalence of LTBI using TST and QFT in the same adult immigrant population. Methods Institutional Review Board approval was obtained from the Centers for Disease Control and Prevention, the Cho Ray Hospital, The Methodist Hospital Research Institute, and the Denver Health and Hospital Authority. All adults provided signed consent; a parent or guardian provided signed consent on behalf of children ages 2-17; and adolescents aged 15-17 also signed an adolescent assent form after parental consent was obtained. Study participants were recruited from December 2008 through January 2010 from Vietnamese visa applicants during the standard immigrant medical examination at the Cho Ray Hospital Medical Visa Unit, using the technical instructions published by CDC [9]. Following the chest radiograph, applicants were invited to participate in a study of TST and QFT; participants would be provided their results, but the results would not affect their visa applications. Recruitment was done following the posterior-anterior radiograph of applicants aged >14 years. First, we attempted to enroll 1,000 applicants with radiographic findings consistent with tuberculosis disease, with the goal of including up to 150 with at least one sputum culture positive for Mycobacterium tuberculosis for the analysis of sensitivity [22]. Second, we also sought to enroll 500 applicants with normal chest radiographs to provide a sample large enough to calculate age-specific prevalence rates of LTBI test results. Until enrollment was completed, every applicant with a chest radiograph consistent with tuberculosis was approached for enrollment. Each week, the first available participants with a normal chest radiograph were enrolled to maintain the 2:1 ratio. QFT was performed on the day of enrollment, followed by TST; participants were instructed to return for TST reading in 48 to 72 hours. QFT was administered only once per participant.
Purified protein derivative was purchased commercially (5 tuberculin units per 0.1 mL, Pasteur Institute, Nha Trang, Vietnam). QFT kits were provided by the Foundation for Innovative New Diagnostics. Prior to and at intervals during the study, we conducted onsite training and quality assurance for TST and QFT testing. We used the following materials for processing QFT samples: Biorad 550 plate reader; Biorad PW42 plate washer; Hermle Z513 centrifuge; QFT software version 2.17 (Cellestis); and QFT kit lots 50511, 50361, and 50441. Pipette calibration was performed annually by a certified vendor. The Cho Ray technician and the study quality control person (Ngan) ran QFT assays simultaneously on every fifth sample (20%) during field visits every 6 months; of the 4-5 samples selected for quality control per visit, all quality control results matched the results of the original samples. TST and QFT laboratory technicians were blinded to clinical results (i.e. chest radiograph, sputum smear and culture). Culture was performed with both solid (Löwenstein-Jensen) and liquid (Mycobacterial Growth Indicator Tube) media (Becton, Dickinson and Company, Franklin Lakes, New Jersey). Sputum specimens were decontaminated as described by Kent and Kubica [23], with one important modification: due to the high concentration of Pseudomonas aeruginosa, fungi, and mycobacteria other than tuberculosis found in Vietnam, the digestion protocol was modified from 2.0% to 2.5% NaOH [24]. Mycobacterium tuberculosis growth on either medium was confirmed by the Gen-Probe (San Diego, USA) AccuProbe assay and by conventional biochemical techniques when necessary [23,25,26]. Participants enrolled based on chest radiograph were reclassified after the sputum culture results into three groups: 1) having a chest radiograph not consistent with TB (Normal-CXR), 2) having a chest radiograph consistent with TB but not culture confirmed (TB-CXR), or 3) having culture-confirmed pulmonary tuberculosis (TB), when M. tuberculosis was isolated from any of the three sputum samples. To estimate the overall and age-specific prevalence of LTBI using TST or QFT, we compared the results of QFT with TST at induration thresholds of >5 mm (TST-5), >10 mm (TST-10), and >15 mm (TST-15) in each participant group. To measure the sensitivity for culture-confirmed pulmonary tuberculosis, we calculated the percent positive results only among those having culture-confirmed pulmonary tuberculosis (TB). For regression analyses, ages were grouped into 5-year strata beginning at 15 years of age through age 64, plus all ages 65 years or older. To estimate the expected percentage testing positive in the entire visa applicant population, we obtained the age-specific tuberculosis status of the entire applicant population during enrollment. We then weighted the study data as a random selection without replacement for those classified as Normal-CXR, TB-CXR, and TB, and summed for each 5-year age group. We estimated the annual percent change for having a chest radiograph consistent with tuberculosis, culture-confirmed tuberculosis, and a positive TST or QFT. The annual percent change was calculated as (e^β − 1) × 100, where β is the slope derived from a generalized linear model with a natural log link fitted to individuals aggregated into 5-year age groups.
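A minimal sketch of this annual-percent-change calculation follows, assuming Poisson counts with the natural log link in statsmodels; the counts and denominators are illustrative placeholders, not study data.

```python
# Minimal sketch of the annual percent change estimate: Poisson GLM with a
# natural log link on 5-year age strata. Counts and denominators below are
# illustrative placeholders, not the study data.
import numpy as np
import statsmodels.api as sm

age_mid = np.array([17.5, 22.5, 27.5, 32.5, 37.5, 42.5, 47.5])  # stratum midpoints
cases = np.array([3, 5, 8, 10, 14, 18, 22])                      # hypothetical counts
persons = np.array([2000, 2400, 2600, 2500, 2300, 2100, 1800])   # hypothetical sizes

X = sm.add_constant(age_mid)                  # intercept + age slope
model = sm.GLM(cases, X,
               family=sm.families.Poisson(),  # log link is the Poisson default
               offset=np.log(persons)).fit()

beta = model.params[1]                        # slope per year of age
apc = (np.exp(beta) - 1) * 100
print(f"annual percent change = {apc:.1f}% per year of age")
```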
Description of the applicant population and study participants

We obtained the tuberculosis classifications of the population of 20,100 visa applicants 15 years of age and older who completed the visa medical exam during the study period (Figure 1). The mean age was 37.3 years; 17,802 (88.6%) had Normal-CXR, 2,087 (10.4%) had TB-CXR, and 211 (1,040 per 100,000 population) had culture-confirmed pulmonary tuberculosis. The age-specific prevalence of tuberculosis increased with age: the annual percentage increase per year of age was 5.5% [95% confidence interval = 5.2%-5.8%] for a chest radiograph consistent with TB and 2.9% [2.0%-3.8%] for culture-confirmed TB (Figure 1b). We enrolled 1,475 participants 15 years of age or older, of whom 479 had Normal-CXR and 996 had a chest radiograph consistent with tuberculosis (Table 1); 100 applicants declined, and 5 did not complete their examination. Of those with an abnormal CXR, 132 (13.3%) were culture-confirmed for tuberculosis (TB) and 864 were not culture confirmed (TB-CXR). Culture-confirmed cases were identified on the first sputum sample for 95 (72.0%); 27 (20.4%) additional cases were identified on the second sputum sample; and 10 (7.6%) on the third sputum sample. One or more sputum specimens were culture-positive for non-tuberculous mycobacteria (NTM) in 6 (4.5%) of the TB group and 105 (12.2%) of the TB-CXR group. Of the 111 patients with at least one culture yielding NTM, 82 (73.9%) had only one culture yielding NTM, and only two of those with more than one NTM culture had at least one positive AFB smear. These findings are most consistent with low-level contamination by NTM rather than lung disease due to NTM.

Sensitivity of TST and QFT for TB

The sensitivity for detecting culture-confirmed tuberculosis was 86.4% (95% CI = 79.3%-91.7%) for QFT, 89.4% (82.8%-94.1%) for TST-5, 81.1% (73.3%-87.5%) for TST-10, and 52.3% (43.4%-61.0%) for TST-15 (Table 1). These results were significantly different for QFT versus TST-15 (Pearson's chi-squared p < 0.001) but not for QFT versus TST-5 (p = 1) or TST-10 (p = 0.12). Compared to those with TB, the prevalence of positive results for each test was much lower for applicants with Normal-CXR and intermediate for those with TB-CXR, some of whom may have had culture-negative tuberculosis, inactive tuberculosis or radiographic abnormalities not due to tuberculosis (Table 1). We also examined the agreement between the tests among the 132 applicants with culture-confirmed tuberculosis (TB), as shown in Figure 2. QFT and TST-5 were both negative in 8 (6.1%) and both positive in 108 (81.8%); 6 (4.5%) and 10 (7.6%) applicants were positive only by QFT or TST-5, respectively, reflecting the similar but imperfect sensitivity of QFT and TST for tuberculosis disease. Increasing the threshold for a positive TST to 10 mm and 15 mm progressively decreased the number of subjects positive by TST, but also decreased the sensitivity of TST compared to QFT.

Prevalence of LTBI for TST and QFT

To assess performance of the tests for LTBI, we compared TST and QFT results among those without culture-confirmed tuberculosis. Compared with the TB group, discordance was greater among the 864 participants in the TB-CXR group (Figure 2); 18.9% (n=163) had a TST <10 mm but a positive QFT, and 12.7% (n=110) had a TST >10 mm but a negative QFT.
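To make the sensitivity estimates reproducible, the sketch below recomputes sensitivity and an exact (Clopper-Pearson) 95% confidence interval from the reported counts. The QFT numerator of 114 follows from the agreement data above (108 both positive plus 6 positive only by QFT); the helper function name is ours.

```python
# Sketch: sensitivity with an exact binomial 95% CI, as reported in Table 1.
from statsmodels.stats.proportion import proportion_confint

def sensitivity_ci(true_positives: int, cases: int):
    """Return sensitivity and a Clopper-Pearson 95% CI among confirmed cases."""
    sens = true_positives / cases
    lo, hi = proportion_confint(true_positives, cases, alpha=0.05, method="beta")
    return sens, lo, hi

# 132 culture-confirmed cases; 114 were QFT-positive (86.4% as reported).
sens, lo, hi = sensitivity_ci(114, 132)
print(f"QFT sensitivity: {sens:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```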
Among the 479 in the Normal-CXR group, the discordance between QFT and TST was even greater than that observed for the two other groups; 9.2% (n=44) had a TST <10 mm but a positive QFT, and 24.8% (n=119) had a TST >10 mm but a negative QFT. (A plot of QFT versus TST response, Figure S1, is available online.) The prevalence of a positive test was dependent on the participant's age, with notable differences by test method (Figure 3). For Normal-CXR (Figure 3), the percent testing positive by QFT was just above 20% at ages 15-19 years, followed by an annual percent increase of 2.1% [0.7%-3.4%]. In contrast, the prevalence of positive results was nearly 3-fold and 2-fold higher for TST-5 and TST-10, respectively, with much smaller annual percent increases (0.75% per year for TST-10; see below). When results from each group were weighted to simulate the test results in the applicant population (Figure 3D), 37% of the population tested positive by QFT, compared with 72% by TST-5.

Discussion

We evaluated the performance of the QFT assay and TST as criteria for the detection of pulmonary tuberculosis and LTBI in a population of approximately 20,000 adult Vietnamese visa applicants undergoing universal chest radiography, followed by sputum cultures for those with any radiographic findings consistent with active or inactive pulmonary tuberculosis. Neither the TST at the most sensitive (5-mm) cutoff nor QFT detected all the culture-positive pulmonary tuberculosis cases detected by this rigorous radiologic and microbiologic screening, detecting 89% and 86% with TST and QFT, respectively. These data suggest that QFT in this and similar high-risk populations is likely to perform as well as TST-5 when used as the initial test during two-stage screening, in which a positive test for LTBI precedes chest radiography, with sputum cultures done for those with any radiographic findings consistent with tuberculosis. In addition to similar sensitivity in detection of tuberculosis, two principal findings support the use of QFT over TST for two-stage tuberculosis screening in this BCG-vaccinated population. First, for detecting tuberculosis disease, we estimate that a positive test result for LTBI would lead to radiography of only 37% of the entire population using QFT, compared with 72% using TST-5, with no difference in case detection. Second, for the goal of recommending treatment for those with a normal chest X-ray and a diagnosis of LTBI, the data in this study suggest that the specificity of QFT is superior to TST-10. Compared with the prevalence of a positive QFT at 15-19 years of age, the 3-fold higher prevalence of a positive TST-5 and the 2-fold higher prevalence of TST-10 are most consistent with a TST reaction due to prior BCG vaccination or exposure to environmental bacteria, leading to an inappropriately high rate of LTBI diagnosis. The increasing annual rate of QFT positivity (2.05% per year) is more consistent with the annual change in prevalence of tuberculosis in the population (2.9%) than is the increase in TST-10 positivity (0.75% per year). These findings indirectly support CDC's recommendation [13] that IGRAs are the preferred test for persons likely to have received BCG vaccine. These findings have important implications for the use of IGRAs in tuberculosis control in the United States.
In this population, when used instead of TST-5 or TST-10 in a two-stage tuberculosis screening process as a precursor to radiography and sputum collection, the use of QFT would detect a similar number of cases but require one-third fewer radiographs (one-half fewer in younger adults), and it would greatly reduce the number of persons for whom LTBI therapy is recommended. Increasing the TST cutoff to 15 mm, as has been considered to reduce the false-positive rate for TST in some settings [31,32], would also result in fewer positive tests but would greatly reduce the sensitivity for tuberculosis disease (to 52%) and, by inference, the sensitivity for LTBI. For tuberculosis disease, the sensitivity of QFT (86%) and TST-10 (81%) was slightly higher in this study than was estimated in a 2007 meta-analysis (pooled sensitivity, QFT = 67% [56%-78%], TST-10 = 72% [50%-95%]) [33], although a 2012 meta-analysis estimated a very similar sensitivity for QFT (89% [87%-91%]) [34]. Nonetheless, this study suggests that tuberculosis cases may be missed by two-stage tuberculosis screening with QFT or TST, which is why screening for TB disease in immigrant populations should also include, at a minimum, chest radiographs based on the results of symptom questionnaires and physical examination. Each year, approximately 500,000 immigrants are screened in the United States with IGRA or TST followed by chest radiograph [10], and 2.5 million newly arriving students, temporary workers, and other non-immigrants are recommended to undergo similar two-stage screening. This strategy is generally effective and has the added benefit of detecting latent tuberculosis infection, but it may be missing approximately 10-20% of asymptomatic tuberculosis cases compared to the rigorously applied overseas radiography and collection of three sputum cultures from those with any radiographic findings suggestive of tuberculosis. The tuberculosis prevalence in this population was greater than in most published studies assessing QFT [35][36][37]. Compared with a 2010 national tuberculosis prevalence study in Vietnam [38], the high tuberculosis prevalence (1.0%) we report may be attributed to performing three serial sputum cultures instead of one. Our findings are similar to the 1.3% reported in a previous study of applicants in Vietnam, in which culture was demonstrated to be approximately 3 times as sensitive as smears [22]. This study has several limitations. First, no acid-fast bacilli sputum smears or cultures were obtained for applicants with chest radiographs not suggestive of tuberculosis; it is unlikely, but possible, that these persons had tuberculosis disease. Second, except for those with tuberculosis disease, tuberculosis infection status cannot be determined with certainty because there is no gold standard for LTBI detection, and therefore specificity could not be calculated. Third, the BCG immunization history of applicants was not obtained; we assumed that all those enrolled had been immunized with BCG at birth. Fourth, our study included only one applicant with HIV; therefore, our conclusions for tuberculosis screening should be limited to HIV-negative persons. (CDC requires all HIV-positive immigrant applicants to have sputum testing for M. tuberculosis.)
Despite these limitations, this study indicates that QFT is superior to TST in BCG-vaccinated populations, since it provides equal sensitivity in the commonly used two-stage screening process for tuberculosis disease and yields fewer positives than TST at 10 mm (only half as many among adults 15-19 years), suggesting superior specificity for LTBI.
Modelling the initial expansion of the Neolithic out of Anatolia

Introduction

Computer-based simulations of the Neolithic expansion in Eurasia using the time-space distribution of ¹⁴C dates have consistently highlighted a gradient from the Near East to the British Isles (Gkiasta et al. 2003; Pinhasi et al. 2005; Bocquet-Appel et al. 2009; Fort et al. 2012). The underlying assumption that agriculture swept across Europe following the advance of a pioneer front has (if anything) comforted Childe's and other diffusionists' accounts of a migration of early culture based on similarities in the pottery and other material remains (Childe 1925; 1950; Elliot Smith 1915[1929]). Clark is widely credited with the first explicit use of radiocarbon dates for modelling Neolithic expansion (Clark 1965). What has changed since Clark, as other authors have pointed out, is not so much the scope as the resolution of the model, which has improved dramatically thanks to the widespread use of ¹⁴C dating (Bocquet-Appel et al. 2009.807). The sheer number of published radiocarbon dates is such that we advocate moving a step further, by drawing a regional simulation of the Neolithic dispersal, this time within a moderately small section of Eurasia (c. 1,000,000 km²), spanning from the Central Anatolian Plateau in the east to Thessaly in the west and the Balkan Range in the north (Fig. 1). Sites in Northern Bulgaria and Serbia fall outside the scope of this paper and will not be considered further here. In the article, we use empirical Bayesian kriging to interpolate the advance of the Neolithic from the Anatolian heartland to Southeast Europe. The model relies upon a comprehensive dataset of 71 sites and 1162 uniformly recalibrated dates, falling within the interval 9000-5500 calBC at 2σ. Unlike other simulations of the Neolithic, which use the oldest observed ¹⁴C date(s) as a proxy for the advance of a pioneer front (e.g., Pinhasi et al. 2005; Bocquet-Appel et al. 2009; Fort et al. 2012), our simulation draws upon modelled dates, statistically constrained by prior information using Bayesian clustering. It goes without saying that this approach is feasible only with a small sample of sites, over which strict quality control can be maintained. The central question being asked of the data is whether the spread of the Neolithic out of Anatolia was a linear process, or whether it consisted instead of standstills punctuated by rapid advances.
What is at stake is the potential identification of so-called farming 'frontiers' within the study region, similar to the ones identified in the Great Hungarian Plain (Whittle 1996; Zvelebil, Lillie 2000), the southern Adriatic coast (Forenbaher, Miracle 2006), the circum-Baltic region (Whittle 1996; Zvelebil 1998) and the Low Countries (Louwe Kooijmans 2007). The traditional view, held by Ammerman and Cavalli-Sforza, is that farming expanded across Europe at a steady pace of approx. 1 km/year (Ammerman, Cavalli-Sforza 1971; 1984.61, 135). This estimate, which has been upheld in recent literature (Pinhasi et al. 2005), is at odds with the archaeological picture outlined above and with recent demographic work, which suggests an expansion in 'booms and busts' (Shennan, Edinborough 2007; Shennan et al. 2013). The latter pattern of spread is usually captured under the concept of 'arrhythmic' expansion (Guilaine 2000). By challenging the linear narrative of farming expansion within the study region, we hope to contribute to a growing body of literature which highlights the crucial role of Anatolia not just as a land bridge, but also as an independent centre of neolithisation (Özdoğan, Başgelen 1999; Özdoğan et al. 2012; Thissen 2000; Gérard, Thissen 2002; Lichter 2005; Gatsov, Schwarzberg 2006; Krauß 2011; Baird 2012; Çilingiroğlu 2012). One of the key issues emerging over the years has been the distinction of two Neolithic traditions, one in Central Anatolia, running broadly concurrent with Pre-Pottery Neolithic B societies in the Near East, and the other in Western Anatolia, coinciding with or shortly pre-dating the widespread adoption of pottery in the Northern Levant (Schoop 2005; Baird 2012; Düring 2013). As this study demonstrates, the advent of farming in Western Anatolia was delayed by up to 2000 calibrated years, and this lag in the dating needs to be properly accounted for in future work.

Dataset and methods

A geostatistical (kriging) method was used to interpolate the spatiotemporal advance of the Neolithic from a set of known values. The first step was to obtain the known values from the sample data: a georeferenced dataset of 1162 calibrated radiocarbon dates from 71 sites (Electronic Supplementary Material 1). This number excludes duplicate entries and dates that fall outside the range 9000-5500 calBC at 2σ. For the period under review, 1057 dates were ascribed to Neolithic and Early Chalcolithic levels and 99 to Epipalaeolithic and Mesolithic levels; 6 came from mixed layers or could not be ascribed to a particular period. A Bayesian model was built for each site where possible, using median estimators of phase boundaries in OxCal 4.2. Two versions of the kriging were constructed: one including virtually all modelled dates, regardless of quality, the other based on a strictly audited sample. In turn, the intensity of the Neolithisation process was evaluated through summed probability distributions of calibrated radiocarbon dates.

¹⁴C data collection, calibration and quality control

The radiocarbon database on which this study relies was collated from published literature and existing datasets, including the CalPal database (Weninger 2014), the CONTEXT database (Böhner, Schyle 2008) and the CANeW dataset (Reingruber, Thissen 2005; Thissen 2006; Gérard, Thissen 2002). Dates were uniformly recalibrated using the IntCal13 atmospheric curve (Reimer et al. 2013) in OxCal 4.2 (Bronk Ramsey 2013). The consistency of the database was checked for out-of-scope and duplicate entries.
In attributing sites or phases to the 'Neolithic', we followed the assessment of the excavators, cross-checking (where possible) the validity of this attribution based on such criteria as the adoption of food production, e.g., domestic plants and/or animals (Childe 1936). On this basis, three of the 71 sites surveyed did not return any 'Neolithic' dates and were not processed any further. Subsequently, two different approaches were pursued. The first one involved limited pre-sorting, excluding only those radiocarbon determinations reported as problematic by the laboratories. The advantage of this method is that virtually all ¹⁴C dates, regardless of quality, could be included in the model, thus pre-empting biases regarding the way in which the selection was made (see also Brami 2015). One potential problem, however, is that evaluating together dates with small and large error margins, arising from several generations of radiocarbon dating, places too much emphasis on the latter. As already pointed out elsewhere (Brami, Heyd 2011.173), dates from mainland Greece, which were mainly processed in the 1950s-1970s, have on average two to four times larger standard deviations than dates from Western Turkey, making any comparison problematic at best. The second approach thus incorporated a degree of chronometric hygiene to monitor the quality of the database. A cut-off value of 100 years BP was arbitrarily set for the standard deviation, meaning that ¹⁴C dates with an uncertainty greater than or equal to this threshold were excluded. Radiocarbon age uncertainty is linked to a variety of factors, not least the resolution of the dating equipment; larger standard deviations may indicate problems with the sample or with the laboratory treatment (Flohr et al. 2015). The problem of the 'old wood' effect in charcoal samples was addressed in the following way. First of all, bulk samples, in which carbon of unknown provenience from the sediment is mixed with carbon from the charcoal, were systematically excluded from the audited dataset. Similarly, unidentified charcoal samples, which may stem from the inner rings of a tree in which ¹⁴C had started to decay years before the tree was felled or burned, were excluded (Zilhão 2001.14181). Finally, long-lived tree charcoal samples from structural timbers such as posts and roof beams, which could be reused in successive buildings (Cessford 2001), were flagged and the corresponding dates discarded. As a result, short-lived materials such as cereal grains, hazelnut shells and bone/antler made up the bulk of the audited dataset. Bone was treated with caution: bone samples from before the introduction of AMS dating (e.g., four UCLA dates from Argissa) were excluded, as were burnt bone and bone apatite (Flohr et al. 2015). This approach is also not without problems. Human bones from coastal regions and river valleys may still have a reservoir effect due to human consumption of marine resources. Seeds, on the other hand, are prone to move across the sediment and, conversely, may be too young. Another consequence is that the dataset on which the second kriging simulation was based was significantly reduced, to 280 dates from 26 sites, leaving entire regions such as Greece to be interpolated from only a few known sites. In conclusion, each of the two methods of sampling, selective and non-selective, has advantages and limitations, but we argue that, taken together, they provide a valuable snapshot of early agricultural expansion out of Anatolia.
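For readers who wish to apply comparable 'chronometric hygiene' to their own datasets, the sketch below shows one way such filters could be expressed in Python with pandas. The column names and material vocabulary are ours, not those of the original database, and the rules only approximate the auditing described above.

```python
# Sketch: approximate chronometric-hygiene filters for a 14C database.
# Column names and material labels are hypothetical.
import pandas as pd

dates = pd.DataFrame({
    "lab_code": ["Bln-1234", "UCLA-987", "OxA-5555", "Hd-2222"],
    "bp": [7950, 8100, 7420, 7800],
    "sd": [120, 80, 35, 60],
    "material": ["bulk charcoal", "bone (pre-AMS)", "cereal grain",
                 "charcoal, unidentified"],
})

EXCLUDED_MATERIALS = (
    "bulk charcoal",           # sediment carbon mixed with charcoal
    "charcoal, unidentified",  # possible inner-ring 'old wood'
    "structural timber",       # reusable posts and roof beams
    "bone (pre-AMS)",          # pre-AMS bone dates
    "burnt bone", "bone apatite",
)

audited = dates[
    (dates["sd"] < 100)        # cut-off of 100 years BP on the standard deviation
    & (~dates["material"].isin(EXCLUDED_MATERIALS))
]
print(audited)                 # only short-lived, well-measured samples remain
```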
With regard to the input data that was fed into the kriging, it consisted of exact calendar dates (Tab. 1). Since calibrated dates are always expressed as a possible range between two values, not as a specific point in time, a protocol was followed to determine the most statistically probable starting date of each site (Fig. 2). The method of median estimators of phase boundaries was used (Bronk Ramsey 2009a; see Thissen 2010 for a practical application). In short, a Bayesian model was created for each site in which sufficient stratigraphic and contextual information was available for the units sampled (e.g., chronometric phases based on ceramic evidence). Bayesian modelling narrowed down the statistical interval of the dates using prior information about, inter alia, the relationship of the dates, for instance their belonging to the same stratigraphic phase, or their coming 'before' or 'after' one another. In practice, this was done using the boundary function in OxCal 4.2 (Bronk Ramsey 2013). Outlier dates, showing poor individual agreement (A < 60%) between the observed data and the model, were identified and down-weighted using the outlier analysis approach described by Bronk Ramsey (2009b). A uniform prior probability of 0.05, corresponding to a 1 in 20 probability of each sample being an outlier, was selected (Bronk Ramsey 2009b.5; see also French, Collins 2015.125). Finally, the median was used as the point estimator for the start phase (Thissen 2010).

Kriging interpolation

The dispersal of early farming from Central Anatolia to the Southern Balkans was modelled using the kriging technique of spatial interpolation and the ¹⁴C values derived above. The principle of kriging is that, knowing the value of a set of points in space, it is possible to estimate the value of other points for which data are absent. This is based on the measure of spatial autocorrelation, expressed through a variogram. The variogram is a function describing the degree of spatial dependence of a spatial stochastic process (Wackernagel 2003). Its calculation is based on the distances among the available paired observations. A mathematical model can hence be fitted to the experimental variogram, and the coefficients of this model can be used for the estimation through the kriging regression (for more information regarding the statistical process, see Cressie, Wikle 2011). Bocquet-Appel and Demars (2000; Bocquet-Appel et al. 2009) applied this method, based on the known distribution of ¹⁴C dates on a uniform grid, in order to estimate the advance of a pioneer front within the context of a colonisation process. This method has some limitations; in particular, it is based on an assumption of spatial homogeneity (Krivoruchko 2012; Pilz, Spock 2007). In other words, this technique is very effective when a subjacent trend is found. Fitting the variogram model to the observed data is a delicate process, which influences the parameters of the regression; if a spatial correlation is not evident, the risk of using an unsuitable variogram model is high. For the present research, it is not clear from the outset whether or not the data have a linear distribution, so it is hard to find a good predictor with ordinary kriging. However, in many cases the best predictor can be non-linear: empirical Bayesian kriging is a method for predicting non-linear distributions (Krivoruchko 2012).
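To make the variogram concrete, the following minimal sketch computes an experimental (empirical) semivariogram from site coordinates and modelled start dates. It is a toy illustration with invented values, not the empirical Bayesian kriging procedure itself, which additionally simulates and weights many candidate variograms.

```python
# Sketch: experimental semivariogram gamma(h) for binned lag distances.
# gamma(h) = (1 / 2N(h)) * sum over pairs at lag h of (z_i - z_j)^2
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(0, 500_000, size=(26, 2))  # site coordinates in metres (UTM-like)
z = rng.normal(-7000, 600, size=26)         # modelled start dates (toy values)

def semivariogram(xy, z, bin_edges):
    """Mean semivariance of all point pairs whose distance falls in each bin."""
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    semi = 0.5 * (z[:, None] - z[None, :]) ** 2
    i, j = np.triu_indices(len(z), k=1)     # count each pair once
    dist, semi = dist[i, j], semi[i, j]
    out = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        vals = semi[(dist >= lo) & (dist < hi)]
        out.append(vals.mean() if vals.size else np.nan)
    return out

edges = np.linspace(0, 500_000, 11)         # 50 km lag bins, as in Figs. 3-4
for lo, hi, g in zip(edges[:-1], edges[1:], semivariogram(xy, z, edges)):
    print(f"{lo/1000:4.0f}-{hi/1000:3.0f} km: gamma = {g:,.0f}")
```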
Empirical Bayesian kriging accounts for the error introduced by variogram estimation by automatically drawing the variogram trend from a range of individual trends. The new variogram models are estimated on the basis of the previously simulated data; a weight for each variogram is assigned using Bayes' rule, showing how likely it is that the observed data could be generated from this variogram. The result of this procedure is the creation of a spectrum of variograms. The predictive density can be calculated by averaging transformed Gaussian distributions (Pilz, Spock 2007). The variogram for the comprehensive dataset is shown in Figure 3. In order to make the calculation of distances as accurate as possible, the sites are in a metric projection (Universal Transverse Mercator). The values on the x-axis are expressed in units of 10⁵ metres (1 = 100,000 m = 100 km) and show the distances among the observed points; the y-axis, in turn, shows their semi-variance. The very high variance near the origin indicates a local heterogeneity, added to unavoidable issues related to the ¹⁴C dates themselves (e.g., data quality, dates not belonging to the earliest Neolithic horizon in the region). The low slope of the estimated variograms shows a very low spatial correlation. The variograms for the audited dataset are represented in Figure 4. In this case, the variance at the origin is much lower, and the trend of the simulated variograms shows a higher spatial correlation. Therefore, this dataset appears more appropriate to represent the spread of Neolithic farming. These variograms are inputted into the kriging interpolation model, providing a graphical representation of the possible timing and path of the spread through the use of isochrones, which are boundaries that contain homogeneous dates.

Summed probability distributions

In addition to the kriging, the calibrated probability distributions of all ¹⁴C dates were summed in order to gain an insight into regional population fluctuations. This approach rests on the assumption that the density of radiocarbon dates in the dataset is directly proportional to human activity (Steele 2010). In fact, both research and taphonomic biases are likely to affect the shape of the ¹⁴C frequency distribution. To avoid sites being over-represented in the dataset (e.g., Çatalhöyük East alone accounts for over 19.4% of all accepted ¹⁴C dates in the study region), multiple radiocarbon dates for each site were first summed to a single distribution. These distributions were then summed across four target regions (Fig. 5).

(Fig. 3 caption: Spectrum of the semivariogram models produced by empirical Bayesian kriging for the comprehensive dataset.)

Summed probability distributions in this case may not be used as accurate demographic proxies, given that the number of radiocarbon determinations in each region is below the 500-date minimum threshold quoted in the literature (Williams 2012). This approach, we admit, leaves open many issues; in particular, peaks and troughs in the distribution may not necessarily reflect population expansion and decline, but instead the plateaus and wiggles of the calibration curve (Williams 2012.581). The aim here was to detect major regional discrepancies in the dating, in the order of several hundred years; summed probability distributions provide a valuable medium to show just how well certain periods are represented in terms of ¹⁴C date distribution.
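A minimal sketch of the summing procedure is given below: each date's calibrated probability distribution is normalized, dates are first combined within sites, and per-site distributions are then summed across a region. The arrays stand in for OxCal/IntCal13 output, which we do not reproduce here.

```python
# Sketch: summed probability distribution (SPD) with per-site pre-summing.
# The calibrated densities below are placeholders for real OxCal output.
import numpy as np

years = np.arange(-9000, -5500)              # calBC time grid, 1-year steps

def normalize(p):
    """Scale a calibrated density so it sums to 1 on the grid."""
    return p / p.sum()

def site_distribution(calibrated_dates):
    """Combine the normalized densities of all dates from one site."""
    return normalize(sum(normalize(p) for p in calibrated_dates))

def regional_spd(sites):
    """Sum per-site distributions so no single site dominates the SPD."""
    return sum(site_distribution(dates) for dates in sites)

# Toy example: two sites, Gaussian stand-ins for calibrated densities.
gauss = lambda mu, sd: np.exp(-0.5 * ((years - mu) / sd) ** 2)
region = [
    [gauss(-7900, 60), gauss(-7850, 80)],    # site 1: two dates
    [gauss(-6400, 50)],                      # site 2: one date
]
spd = regional_spd(region)
print(years[spd.argmax()], spd.sum())        # peak year; total mass = n sites
```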
They provide an additional control layer, showing not just when farming initially took off, but also how this process was sustained over time, once all the dates are taken into consideration.

Results

The kriging interpolation of the space-time distribution of ¹⁴C dates, whether based on the entire dataset or only a sample thereof, indicates a westward regression of the onset of farming from the Central Anatolian Plateau to the Aegean Basin, followed by a northward shift to inland Thrace and Macedonia. The incremental way in which the isochrones ripple out of Central Anatolia may, we argue, be an artefact of the kriging. Multiple isochrones at short distances from each other presumably indicate a standstill or very slow progression. In turn, summed probability distributions of calibrated radiocarbon dates indicate that the advent of farming in Western Anatolia was delayed by up to 2000 calibrated years, supporting the identification of a major chronometric lag between the start of the Neolithic in this region and in Central Anatolia. Figure 6 shows the expansion of the Neolithic, in 250-year isochrones, based on a comprehensive dataset of modelled radiocarbon dates. Compare with Figure 7, which draws on the modelled values of the audited dataset while sharing the same simulation environment. Both simulations highlight the remarkably early uptake of agricultural production on the Central Anatolian Plateau, which was presumably a major centre of food-plant and animal domestication (Buitenhuis 1997; Asouti, Fairbairn 2002; Martin et al. 2002; Pearson et al. 2007; Arbuckle et al. 2012). Surprisingly, the Pisidian Lake District, which is located at the western end of the Anatolian Plateau, already reflects a much younger tradition. The interpolation shows between two (Fig. 6) and six (Fig. 7) 250-year isochrones between Cappadocia and the Lake District, that is, a little over 200 km, in a region which is not characterised by any major topographic boundary. If there was an expansion of the Neolithic towards the west, across the Anatolian Plateau, it was extremely slow, possibly lasting hundreds if not thousands of years. The second kriging simulation, in particular, struggles to interpolate this advance, marked out by not too distant sites showing major discrepancies in modelled start date, e.g., Asıklı (7934 calBC) and Höyücek (6353 calBC). The kriging produces artificial contour lines to span what is essentially a major lag between two Neolithic regions. In any case, the pattern suggests that agriculture was initially held off in Cappadocia and the Konya Plain, with the 'bond' finally breaking sometime in the 7th millennium calBC (Düring 2013).

Modelling the advance of the agricultural pioneer front

The above-outlined view is further supported by the subsequent change in direction of the isochrones, from south to north rather than from east to west, in the Aegean Basin. Here, the two simulations differ significantly. The first kriging simulation, based on the non-audited dataset, suggests that the Lake District, together with Knossos in Crete, provided a starting point for the initial spread of the Neolithic into Europe. Once the older dates from Hacılar and Bademağacı are excluded from the dataset, due to their poor quality, the Lake District becomes a potential crossroads between a land-way from the east across the Anatolian Plateau and a sea-way to the west, spearheaded by slightly older sites like Çukuriçi Höyük and Ulucak.
At present, the chronological differences between the Lake District and the Aegean coast of Anatolia are too small to draw firm conclusions about the existence of this second route. The first kriging simulation highlights a fairly synchronous adoption of agriculture on both sides of the Aegean Basin (Fig. 6). If true, the Aegean Sea probably acted more as a bridge than as a frontier, as also indicated by early dates on the islands of Crete (Knossos), Kythnos (Maroulas) and Gökçeada (Uğurlu). Southern Aegean sites appear to be slightly older than those in the north, on average by c. 500-750 years depending on the simulation, but the distance to cover is much greater, approx. 600 km from one end of the Aegean Basin to the other. Once again, differences in the dating are significant but not drastic; they may be explained by other factors, such as a plateau in the calibration curve in the first half of the 7th millennium calBC, which may influence the simulation (Reingruber, Thissen 2009; Weninger et al. 2014). On the other hand, radiocarbon dates for the Aegean seaboard sites and adjacent regions, like the Thessalian plains, are significantly older than those encountered further inland, particularly in Thrace. Upriver sites in the Struma and Maritsa valleys demonstrate at least one further chronological step in the advance of the Neolithic, with the resulting expansion potentially being driven from west to east rather than from east to west (Lichter 2006).

The Central/Western Anatolian farming frontier

The rapid and incremental manner in which the interpolated isochrones succeed each other across the Central Anatolian Plateau (Fig. 7) lends support to the idea that agriculture was initially contained within this region, spreading internally to multiple sites and communities before radiating outward (Düring 2013). A regional stasis at the onset of the Neolithic in Anatolia can be represented graphically using summed probability plots (Fig. 8A-D). Notice in Figure 8A the calibrated probability distribution of ¹⁴C dates in Central Anatolia during the interval 8500-7000 calBC. Remarkably, this period is almost entirely unaccounted for in Western Anatolia (Fig. 8B), suggesting that the Neolithic started there 1500-2000 years later (Brami 2015). The peak in distribution after c. 6500 calBC perhaps marks the initial explosion of the Neolithic in the region. As it is barely noticeable in the other graphs, this peak is unlikely to have been artificially created by the calibration process. Within the current dataset, there is no indication that Neolithic expansion in Western Anatolia was preceded by a population crash in Central Anatolia. On the contrary, the two distributions run largely concurrent during the interval 6500-6000 calBC. Further west, the question can be raised as to whether the abandonment of sites in Western Anatolia post c. 5800 calBC coincided with a renewed expansion of the Neolithic into Greece and Thrace. The summed probability distribution for the Greek Neolithic is skewed towards a slightly later horizon (Fig. 8C). Dates spanning c. 7600-7000 calBC are statistical outliers, which can be firmly discounted (Perlès 2001; Brami, Heyd 2011).

(Fig. 5 caption: Location of the four target regions: A: Central Anatolia (red); B: Western Anatolia (blue); C: Greece (orange); D: Thrace (green). Background map designed by M. Börner.)
They show the inherent risk involved in keeping dates with large standard deviations from old excavations, which generate background noise, as in this case. Thrace represents a further step in time, with the greater part of the distribution presumably falling outside the study period (Fig. 8D). For reference only, the rate of expansion of the Neolithic for the region under review was measured using the technique described by Ammerman and Cavalli-Sforza (1971). A regression to calculate the rate of expansion, as per the cited article, was performed, using Asıklı as a potential centre of diffusion. The speed implied by the distance-versus-time regression was 0.32 ± 0.11 km/year (the range of 0.11 corresponds to the 95% confidence interval), while the time-versus-distance regression returned a much faster diffusion rate, 1.07 ± 0.36 km/year (Fig. 9A). The first regression (distance-versus-time) would be preferable if most of the error were due to the dating, while the second (time-versus-distance) would be preferable if the error were due to the distances. In this case, the distances are exact, so the first regression is of more direct relevance. This approach assumes a linear fit of the regression coefficient. For the present dataset, the correlation coefficient was low, i.e. 0.58 (compare with >0.80 in Pinhasi et al. 2005). This relatively low spatio-temporal correlation is illustrated in Figure 9B, where the data distribution appears to be divided into two clusters. The data clusters show the potential lag in Neolithic occupation between Central and Western Anatolia, further undermining the relevance of a linear fit.

The case for an arrhythmic model of Neolithic expansion

If we assume a linear regression from a hypothetical origin at Asıklı, the rate of expansion of the Neolithic within the study region was very low, 0.32 km/year on average (Fig. 9). It was much lower, for instance, than the c. 1 km/year estimated for Europe as a whole (Ammerman, Cavalli-Sforza 1971.681; 1984; Pinhasi et al. 2005). Marina Gkiasta et al. have already pointed out that Ammerman and Cavalli-Sforza's average concealed wide regional variations: only 0.7 km/year in the Balkans, but a record 5.6 km/year in Central Europe (Gkiasta et al. 2003.45; see Ammerman, Cavalli-Sforza 1971.684). In what follows, we suggest that calculating a mean rate of expansion for the study region is potentially misleading, because it assumes a linear wave-dispersal model, which is not consistent with the evidence. From a regional perspective, one indeed observes that linear regression models unduly normalise highly particularised sets of values. Several arguments can be made in support of an arrhythmic model of Neolithic expansion. First of all, the non-uniform distribution of the isochrones in the two kriging simulations and their change of direction over time (east-to-west then south-to-north) are strong indications that farming did not expand in a linear manner, spreading instead in fits and starts. Furthermore, the incremental way in which the isochrones ripple out of Central Anatolia in the second simulation (Fig. 7) suggests that farming expansion in this region was extremely slow or halted. A long stasis at the outset of the Neolithic on the Central Anatolian Plateau has been represented graphically using summed probability distributions (Fig. 8). Data clusters in the age-distance graphs further demonstrate the existence of a chronometric lag between Central Anatolia and regions further afield (>400 km; Fig. 9).
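The two regressions can be reproduced in a few lines of Python. The sketch below contrasts distance-versus-time and time-versus-distance fits on invented site data (distance from Asıklı against modelled start date), so the coefficients will not match the published values.

```python
# Sketch: rate-of-expansion regressions after Ammerman & Cavalli-Sforza (1971).
# Site distances and dates are invented; only the method follows the text.
import numpy as np
from scipy.stats import linregress

distance_km = np.array([0, 150, 220, 480, 650, 800, 950])       # from Asıklı
start_calbc = np.array([7934, 7500, 6353, 6500, 6400, 6100, 6000])
years_elapsed = start_calbc.max() - start_calbc                 # time since origin

# Distance-versus-time: slope is km/year (preferable if dating error dominates).
d_on_t = linregress(years_elapsed, distance_km)
# Time-versus-distance: invert the slope to recover km/year.
t_on_d = linregress(distance_km, years_elapsed)

print(f"distance-vs-time: {d_on_t.slope:.2f} km/yr, r = {d_on_t.rvalue:.2f}")
print(f"time-vs-distance: {1 / t_on_d.slope:.2f} km/yr")
```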
The results outlined in this paper are consistent with a previous identification of a 2000-year lag in Neolithic occupation between the Central Anatolian Plateau and the Aegean Basin (Brami 2015). Farmers appear to have been initially held off in this region. On the evidence of the summed probability plots, there is no indication that a 'bust' preceded the 'boom', as in other regions of Europe (Shennan et al. 2013). No regional population collapse can be detected in Central Anatolia before c. 6000 calBC (Fig. 8A). On the face of the evidence presented, the idea of a farming frontier crystallising as a result of either a loss of momentum in the Neolithic core or an encounter with resistance in Western Anatolia appears more likely. The 'bond' was finally breached c. 6500 calBC, with a subsequent explosion of sites recorded throughout Western Anatolia (Düring 2013).

Limitations of the study

Kriging is arguably a powerful technique to interpolate the spread of early farming across Eurasia (Bocquet-Appel et al. 2009). One issue that this paper has sought to address is the assumed linearity of ordinary kriging, which makes the computation of non-linear expansion behaviour, such as an arrhythmic spread in fits and starts, problematic. Where sites on either side of a 'frontier' display widely different values, ordinary kriging breaks down the gap between them into a series of isochrones, essentially imposing linearity where there is none. The method of kriging used here, empirical Bayesian kriging, addresses this issue by adjusting the simulation at each of the input data locations (Krivoruchko 2012). Although the results obtained with this method indicate an improvement in kriging data with a non-stationary covariance structure, the second interpolated map (Fig. 7) still displays an incremental pattern of expansion out of the Central Anatolian Plateau (a 'ripple' effect). One potential issue with this simulation lies in the number of plotted sites, which at 26 is not high enough to generate an accurate isochrone map. The second kriging simulation possibly lacks in resolution what it makes up for in data quality. Another limitation of the kriging method of interpolation as it has been pursued here is that it operates in a spatially neutral environment, where every section of the map is given equal weighting regardless of its geographic context, i.e. valley bottom, mountain top, sea, etc. This is consistent with previous applications of kriging for modelling the expansion of the Neolithic in Eurasia (Bocquet-Appel et al. 2009). One way forward would be to use the 'best patch' variable (e.g., Bocquet-Appel et al. 2014.63-64), which amounts to grading land according to its agricultural potential. Approaches based on the spatiotemporal distribution of ¹⁴C dates are helpful for describing a geographic spread, less so for analysing or explaining it. The models presented in this paper do not take into account a multitude of variables which may have influenced early farmers. A different approach, which estimates climatic variables and their effect on the landscape as well as the socio-economic systems and demographic structure, is agent-based modelling. This holistic approach, which brings in data from different disciplines (economy, anthropology, ethnography, palaeo-climatology), has recently been introduced into archaeology, allowing one to test scenarios that could not be inferred from purely archaeological observations (Axtell et al. 2002; Kohler et al. 2007; Janssen 2009; Bocquet-Appel et al. 2015).
Conclusion

This article has established, through a suite of geostatistical and graphical simulations, that the advance of the Neolithic from the Anatolian heartland to Southeast Europe involved at least two distinct stages. Farming was initially held off on the Central Anatolian Plateau: up to 2000 calibrated years were necessary to bridge the chronometric lag between Central and Western Anatolia (Brami 2015). Once early farming finally spread into the Southwest Anatolian Lakes Region and the Aegean Basin, shortly before c. 6500 calBC, it rapidly made its way north, reaching Eastern Thrace c. 6000 calBC. The pattern of spread described in this paper is consistent with an arrhythmic model of diffusion, involving major standstills (or 'arrhythmic phases'), i.e. the Central/Western Anatolian farming frontier, punctuated by rapid and/or regular advances in the Aegean Basin and the Southern Balkans (Guilaine 2000.268-270). Moreover, this paper has demonstrated that linear regression models, such as the 'wave of advance' (Ammerman, Cavalli-Sforza 1971; 1984), virtually conceal strong regional variations in the data by normalising them. While these approaches may be useful on the scale of Eurasia to describe the overall pattern and direction of spread, moving one scale down, the reader can see that they fail to reflect the fits and starts of the process, in this case the crystallisation of boundary or frontier zones which preceded the ultimate explosion of farming communities c. 6500 calBC. The use of Bayesian clustering alongside the kriging helped to further sharpen the resolution of the model. Two kriging simulations were presented, one based on virtually all calibrated ¹⁴C dates, the other on a strictly audited sample. Together they provided a valuable picture of the Neolithic expansion out of Anatolia, as evidenced by the time-space distribution of ¹⁴C dates. Finally, summed probability plots were used to show regional population fluctuations past the initial expansion of the Neolithic.

M. Brami is the recipient of a post-doctoral fellowship of the National Research Fund, Luxembourg (AFR project no. 9198128). A. Zanotti is a BEAN researcher (Bridging the European and Anatolian Neolithic). Thanks are owed to Barbara Horejs, Bernhard Weninger and to members of the Anatolian Aegean Prehistoric Phenomena (AAPP) research group at OREA for helpful information and comments. The responsibility for any shortcomings remains with the authors.
Emergence and intensification of dairying in the Caucasus and Eurasian steppes

Archaeological and archaeogenetic evidence points to the Pontic–Caspian steppe zone between the Caucasus and the Black Sea as the crucible from which the earliest steppe pastoralist societies arose and spread, ultimately influencing populations from Europe to Inner Asia. However, little is known about their economic foundations and the factors that may have contributed to their extensive mobility. Here, we investigate dietary proteins within the dental calculus proteomes of 45 individuals spanning the Neolithic to Greco-Roman periods in the Pontic–Caspian Steppe and neighbouring South Caucasus, Oka–Volga–Don and East Urals regions. We find that sheep dairying accompanies the earliest forms of Eneolithic pastoralism in the North Caucasus. During the fourth millennium bc, Maykop and early Yamnaya populations also focused dairying exclusively on sheep while reserving cattle for traction and other purposes. We observe a breakdown in livestock specialization and an economic diversification of dairy herds coinciding with aridification during the subsequent late Yamnaya and North Caucasus Culture phases, followed by severe climate deterioration during the Catacomb and Lola periods. The need for additional pastures to support these herds may have driven the heightened mobility of the Middle and Late Bronze Age periods. Following a hiatus of more than 500 years, the North Caucasian steppe was repopulated by Early Iron Age societies with a broad mobile dairy economy, including a new focus on horse milking. Milk proteins from the North Caucasus and Eurasian steppe support the initial development of sheep dairying during the Eneolithic, followed by subsequent intensification and husbandry of different dairy animals during the Middle Bronze Age and later periods.

In the eastern Eurasian steppe, the earliest evidence for dairying dates to ca. 3000 bc with the appearance of mobile steppe herders associated with the Early Bronze Age Afanasievo culture 2, a group with close genetic and cultural ties to pastoralists on the Pontic-Caspian steppe, most notably the Yamnaya culture (ca. 3300-2500 bc) 20-23. Populations from the Pontic-Caspian steppe are also linked to Late Neolithic and Bronze Age westward expansions, including the emergence of the Corded Ware (2900-2200 bc) and Bell Beaker (2750-1800 bc) phenomena in Europe 24-27. Understanding the population and economic history of the Pontic-Caspian steppe, the source region for these continental-scale expansions during the third millennium bc, is critical for revealing the main factors that drove the heightened mobility of Eneolithic and Early Bronze Age pastoralists in Eurasia. When Pontic-Caspian steppe populations first began dairying, and how their animal management strategies may have influenced their mobility and subsequent migrations, remain poorly known. From the Mesolithic through the Eneolithic, populations living in the southern Russian plain and Caucasus region primarily hunted local wild game, which included aurochs (Bos primigenius), saiga antelope (Saiga tatarica), red deer (Cervus elaphus), tarpan (Equus ferus), onager (Equus hemionus) and wild boar (Sus scrofa), as well as birds, fish and molluscs 28-31.
Animal husbandry of domesticated sheep (Ovis aries), goats (Capra hircus), cattle (Bos taurus) and pigs (Sus scrofa) spread to the North Caucasian steppe from Anatolia during the fifth millennium bc, either by a circum-Pontic route 28 or by crossing the Caucasus mountains from the south 32-35. By the mid-fifth millennium bc, agropastoralists of the Cucuteni-Trypillia culture in Ukraine were regularly interacting with steppe populations north of the Black Sea 36, and Eneolithic populations genetically related to South Caucasian and Anatolian agropastoralist groups had become established in the North Caucasus piedmont steppe 32,33,37 and were part of a broader Mesopotamian interaction sphere 38,39. After the introduction of animal husbandry to the region, Bronze Age steppe populations innovated a new economic system of mobile pastoralism focused on sheep and cattle 40, and settlements became effectively absent on the steppe for the next two millennia 40,41. This new, more mobile form of pastoralism is first evident among Steppe Late Maykop groups (3500-2900 bc), who fall broadly within the Late Maykop cultural sphere but are genetically distinct from their higher-elevation counterparts 33, and fully mobile pastoralism subsequently became the predominant subsistence strategy on the steppe with the Yamnaya culture (3300-2500 bc) 41. Horse domestication occurred during the third millennium bc on the Pontic-Caspian steppe 42,43, and, by the late third and early second millennium bc, domestic horses were increasingly part of the steppe mobile pastoralist economy 44 and had even spread to Anatolia and Mesopotamia through Pontic-Caspian-Transcaucasian interaction networks 45. Mobile pastoralism continued among the Catacomb (2800-2200 bc) and North Caucasus Culture (NCC; 2800-2400 bc) groups in the steppe until worsening climatic conditions and aridification ca. 2300-2200 bc, in association with the 4.2 kya climate event 46,47, ultimately led to an abandonment of the steppe region by 1700 bc 40,41. Despite their cultural differences, recent palaeogenomic analysis has shown that these Bronze Age steppe populations were genetically highly similar 33, which may, in part, reflect their mobile lifestyles and persistent multicultural interactions over millennia 40. Throughout the Pontic-Caspian steppe, sheep, goat and cattle dominate most studied archaeofaunal collections from the fourth to second millennia bc 41,48,49. Wheeled transport in the form of wagons first appears in kurgans (burial mounds) of the Steppe Late Maykop in the second half of the fourth millennium bc 50, and such technology is argued to be essential for enabling the household mobility required for mobile pastoralism 40. Oxen teams dated to the same period and, later, horses and chariots in the second millennium bc further facilitated mobility 51. Sheep wool was present in the North Caucasus by the early third millennium bc, possibly having originated in Anatolia, and the use of wool subsequently spread across the steppe and into Inner Asia during the second millennium bc 52. Among the region's major secondary products, dairying is argued to have possibly emerged first 50, in part because dairying was already well established in both Anatolia and surrounding regions by the sixth millennium bc 6,53-55, whereas traction and wool are attested only millennia later.
Nevertheless, current evidence for early dairying in the Pontic-Caspian steppe is, until now, attested only on its eastern fringes 7. Previous isotopic studies have been unable to identify clear indications of dairy consumption, finding instead non-specific evidence for high consumption of animal protein and a highly complex isoscape, reflecting both ecological diversity and temporal climatic shifts 41,48,56. However, the isotopic data suggest a stronger contribution of sheep or goat products to the human diet than those from cattle 41. Few zooarchaeological studies have systematically investigated herd management and mortality profiles in the region, but the earliest agropastoralist communities in the North Caucasus piedmont steppe were not thought to have engaged in dairying 49. Likewise, there are few indications of animal management for milk production among Neolithic agropastoralist communities in the South Caucasus 42. Rather, it is only in the second millennium bc that zooarchaeological studies from Late Bronze Age settlements in the Caucasus have found clear evidence for the deliberate keeping of sheep for milk production 57,58, and it is only later, during the Iron Age, that cattle show mortality profiles consistent with dairying 59. The absence of settlements on the steppe and the near-exclusive archaeological focus on mortuary contexts have made it difficult to reconstruct the nature and extent of dairying in the wider North Caucasian pastoralist economy. In this article, we apply high-resolution tandem mass spectrometry to human dental calculus from 45 individuals at 29 sites in the North Caucasus (n = 27) and the neighbouring South Caucasus (n = 9), Oka-Volga-Don (n = 7) and East Urals (n = 2) regions (Fig. 1a,b, Supplementary Data 1 and Supplementary Information) to identify evidence of consumed dairy proteins in populations spanning the Neolithic to the Greco-Roman periods (ca. 6000 bc to 200 ad). We find that dairy products were consumed in the North Caucasus from the late fifth millennium bc onwards and that a dairy-inclusive subsistence characterizes even the Eneolithic populations in the piedmont and steppe zones. Dairy consumption was prevalent in all analysed periods and ecotones in the North Caucasus, with milk proteins identified in 26 of 27 tested individuals. We identify an initial, near-exclusive dairying focus on sheep among the Maykop, Steppe Maykop and early Yamnaya, followed by diversification within the late Yamnaya, NCC and Catacomb cultures during the Middle Bronze Age to additionally incorporate goat and cattle milking.

(Fig. 1 caption: Individuals are organized by region, with archaeological culture or period indicated by colour corresponding to the legend. White circles indicate median calibrated radiocarbon dates, and error bars are 2 s.d. Coloured bars display the time spans conventionally associated with the archaeological cultures and time periods. c, Milk protein evidence by individual, displayed as total PSM count for the milk proteins BLG, alpha-lactalbumin and alpha-S1-casein. Consensus livestock assignment was determined by parsimony. a, Two dental calculus samples were analysed from ZO2002. Basemap is from https://www.naturalearthdata.com/.)

Later, during the Early Iron Age, we observe direct evidence of horse milk consumption in association with pre-Scythian groups repopulating the steppe after a centuries-long hiatus. In the South Caucasus, we identify evidence of cattle milking (ca.
3700 bc), nearly 1,000 years before we first observe it in the North Caucasus (ca. 2700 bc), and, in the Oka-Volga-Don region, we observe only limited evidence of dairy consumption (see Results).

Results

Milk proteins were identified in 34 of 45 analysed individuals across all time periods (Fig. 1c and Supplementary Data 1). Protein recovery in 31 individuals was sufficient to allow the identification of major ruminant livestock milks from sheep (Ovis), goat (Capra) and/or cattle (Bos/Bovinae), whereas the milk proteins of three individuals were represented by non-specific bovid peptides, indicating either sheep or cattle. Additionally, one individual had taxonomically distinctive peptide spectral matches (PSMs) to Equus milk proteins. Beta-lactoglobulin (BLG), which was detected for all dairy livestock (Fig. 2), was the most prevalent and abundant milk protein detected, a pattern consistent with previous studies of dental calculus 2,6,60. In addition to BLG, which was identified in all 34 milk-positive individuals, we also identified the whey protein alpha-lactalbumin in two individuals and the curd protein alpha-S1-casein in two individuals. All dental calculus samples yielded proteomes consistent with an oral microbiome profile, and age-associated N/Q deamidation was a top modification across the dataset (Fig. 3a,b).

(Fig. 2 caption: a, Overall, most BLG sequences were highly conserved among bovids (left) but distinct from equids (right). Spectra originate from AY2005 and MK5018. b, Among bovids, the BLG C-terminus peptide distinguishes caprines (left) and bovines (right). Spectra originate from VS2001 and VS2001. c, The most frequently observed peptide reliably distinguishes Ovis (upper left), Capra (lower left) and Equus (lower right) but cannot distinguish Ovis and Bovinae due to the ambiguity of the sixth residue, which may be aspartic acid (Bovinae) or deamidated asparagine (Ovis) 6 (upper right). Spectra originate from KUG007, RK4002, VS2001 and MK5018. The b- and y-ion series is shown at the top left of each spectrum, and taxonomically informative residues within the peptide sequence are highlighted in pink. A comprehensive list of all identified PSMs and taxonomic assignments is provided in Supplementary Data 3.)

At the start of the third millennium bc, we identified a broad shift in pastoralist practices towards more diversified dairying based on sheep, goat and cattle milk (Fig. 3c,d). Milk proteins from these three ruminant species were identified among individuals associated with the late Yamnaya (ca. 2850-2500 bc; n = 1), NCC (ca. 2800-2400 bc; n = 4), Catacomb (ca. 2800-2400 bc; n = 1), late NCC (ca. 2200-1650 bc; n = 1) and Lola/post-Catacomb groups.

South Caucasus. In the South Caucasus, we analysed dietary proteins within the dental calculus proteomes of nine individuals dating from the Neolithic to Greco-Roman periods and identified milk proteins in half of the analysed individuals (Fig. 1c and Supplementary Data 3). No milk proteins were detected in the earliest individual, MTT001, dated to 5879-5562 bc, from the Neolithic site of Mentesh Tepe associated with the Shomutepe-Shulaveri Culture. However, milk proteins were detected from the fourth millennium bc onwards in individuals dating to the Chalcolithic at Alkhantepe (n = 1), the Middle Bronze Age at Qızqala (n = 2), the Iron Age at Göytepe (n = 1) and the Greco-Roman period at Qabala (n = 1). Unlike in the North Caucasus, we did not observe an early focus on sheep dairying; rather, the earliest detected milk protein, identified in individual ALX002 dating to 3776-3651 bc, was assigned to cattle (Bovinae). Overall, we identified cattle (Bovinae), goat (Capra) and sheep (Ovis) milk protein in the South Caucasus but no horse (Equus) milk in any period there (Fig. 3).
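Figure 1c notes that consensus livestock assignment was determined by parsimony. As a rough illustration of that logic, the sketch below assigns the smallest set of taxa that explains all of an individual's milk-peptide matches, preferring specific matches (e.g., Ovis) and letting them subsume compatible non-specific ones (e.g., Bovidae). The taxonomy table, rule set and example data are ours, simplified for illustration.

```python
# Sketch: parsimony-style consensus of taxonomic assignments from milk PSMs.
# Taxonomy and example PSMs are simplified placeholders.
SUBSUMES = {  # a specific taxon "explains" matches to these broader groups
    "Ovis": {"Caprinae", "Bovidae", "Pecora"},
    "Capra": {"Caprinae", "Bovidae", "Pecora"},
    "Bovinae": {"Bovidae", "Pecora"},
    "Equus": set(),
}

def consensus(psm_taxa):
    """Keep specific taxa; drop broader groups already explained by them."""
    specific = {t for t in psm_taxa if t in SUBSUMES}
    explained = set().union(*(SUBSUMES[t] for t in specific)) if specific else set()
    unexplained = {t for t in psm_taxa if t not in SUBSUMES and t not in explained}
    return sorted(specific | unexplained)

# An individual with Ovis-specific peptides plus non-specific bovid matches
# is most parsimoniously assigned to sheep alone.
print(consensus(["Ovis", "Bovidae", "Pecora"]))    # -> ['Ovis']
print(consensus(["Bovidae"]))                      # -> ['Bovidae'] (sheep or cattle)
print(consensus(["Capra", "Bovinae", "Bovidae"]))  # -> ['Bovinae', 'Capra']
```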
Unlike in the North Caucasus, we did not observe an early focus on sheep dairying; rather, the earliest detected milk protein, identified in individual ALX002 dating to 3776-3651 bc, was assigned to cattle (Bovinae). Overall, we identified cattle (Bovinae), goat (Capra) and sheep (Ovis) milk protein in the South Caucasus but no horse (Equus) milk in any period there (Fig. 3).

Oka-Volga-Don region. In the Oka-Volga-Don region, we analysed dietary proteins within the dental calculus proteomes of seven individuals dating from the Eneolithic through the Middle Bronze Age (Fig. 1c and Supplementary Data 3). No milk proteins were detected among the Eneolithic to Middle Bronze Age individuals from the Shagara cemetery, dating to 2572-1893 bc. Only an individual at the site of Rovenka tested positive for milk proteins. This individual, RVK001, was associated with a late Catacomb culture site, dating to 2339-2148 bc, and was positive for sheep (Ovis), goat (Capra) and cattle (Bovinae) milk proteins (Fig. 1c).

East Urals region. We analysed two individuals from the East Urals region at the site of Neplyuyevka associated with the Late Bronze Age Srubnaya-Alakul cultural variant and dating to ca. 1900-1600 bc (Fig. 1c). We detected milk proteins for both individuals, identifying sheep (Ovis) and cattle (Bovinae) peptide sequences for NEP008 and non-specific bovid peptide sequences indicating either sheep or cattle for NEP013 (Supplementary Data 3).

Discussion
Eneolithic populations practiced dairy pastoralism. Our results provide robust evidence that sheep dairying was practiced among fifth millennium bc Eneolithic groups in the North Caucasus piedmont and steppe zones. This finding resolves long-standing questions about the antiquity of dairying in the North Caucasus 47 , as well as the species focus of early dairy herds, and it contributes to a growing body of evidence that dairying was likely introduced with domesticated livestock into the North Caucasus from the south. Recent palaeogenomic studies identified a genetic cline connecting Neolithic populations in eastern Anatolia and the South Caucasus that likely formed as early as 6500 bc 32 , and continued population interaction into the Chalcolithic and Early Bronze Age periods (5500-3000 bc) suggests that these regions maintained close contact, with animal husbandry focused on pigs and ruminants also spreading via this corridor 61,62 . Early agropastoralists living in the northern Caucasus foothills associated with the Darkveti-Meshoko Eneolithic culture (ca. 4500-4000 bc) have a clear genetic connection to populations south of the Caucasus exhibiting Anatolian ancestry 33 , suggesting a trans-Caucasian population expansion. Although it has been speculated that dairying may have spread to the North Caucasus via these southern connections 50 , few systematic zooarchaeological studies have been conducted, and the Eneolithic/Chalcolithic layers at the piedmont site of Meshoko Cave, which are among the best studied for the period 49 , have yielded limited faunal remains, primarily of pigs, sheep, goats and cattle slaughtered at various ages. Subsequent attempts to clarify the agropastoralist economy using stable isotope analysis 41,48 have yielded equivocal results as to whether dairying was an Eneolithic or Bronze Age innovation in the North Caucasus. Here, through the identification of taxonomically informative peptides from the milk-specific protein BLG, we confirmed sheep milk consumption by Eneolithic individuals at the sites of Progress 2 and Kurganny 1.
Notably, we found that dairy consumption was evident among individuals lacking Anatolian ancestry, such as PG2001 33 , demonstrating that the adoption of dairying by North Caucasian transitional foragers was already underway during the late fifth millennium bc, which precedes Yamnaya expansions by a millennium.

Maykop and Steppe Maykop dairy focused on sheep, not cattle. With the start of the fourth millennium bc, we found a continued reliance on dairy pastoralism revealed by ubiquitous evidence of milk consumption among all tested Maykop and Steppe Maykop individuals, further clarifying the high dependence of these groups on animal products 41,47 . Surprisingly, however, the dairy economy retained an apparent focus on sheep. Although sheep are known to have been important livestock for these groups 40,47 , cattle feature more prominently at Maykop mortuary sites. They are modelled into gold and silver figurines 63 , and an emphasis on the power and mobility of cattle is visible in funerary offerings of cattle crania, yokes and nose rings representing oxen teams 50 . Cattle also appear in bone assemblages at Maykop settlements 49 . The perishability of the major sheep secondary products of milk and wool, in contrast to the high visibility of cattle-associated material culture and skeletal remains, may have contributed to a biased understanding of the relative importance and roles of these livestock at Early Bronze Age sites 64 . Our results suggest that cattle were not important dairy livestock during this period and that there was probably a sharp division in livestock use among the Maykop and Steppe Maykop groups 41 , with sheep being the primary targets of dairying and cattle mainly being used for traction and as a signifier of social identity and status.

Dairy livestock diversified during the Middle Bronze Age. A change in dairying strategy to focus on more livestock species coincides with the Yamnaya horizon. Following the Maykop period, mobility expanded ever further with Yamnaya groups, who became the first permanently mobile pastoralists 17,44,65,66 . Although two early Yamnaya individuals analysed here yielded evidence of only sheep milk product consumption, a more diversified profile comprising sheep, goat and cattle milk was observed for a late Yamnaya individual at the site of Zolotarevka (ZO2002). This trend towards reliance on a broader range of dairy livestock continued and intensified during the Middle Bronze Age, when we observed a general diversification of pastoralist diets to routinely include sheep, goat and cattle milk. Most individuals of the Middle Bronze Age Catacomb, NCC, Late NCC and Lola cultures tested in this study consumed the dairy products of two or three livestock species. Palaeoecological studies have indicated that climate began to shift during the late Yamnaya phase, which also coincided with the first appearance of the Catacomb and NCC groups 48 . Before this, the climate experienced by the Maykop, Steppe Maykop and early Yamnaya was more favourable 67,68 and conducive to regular, short-distance annual mobility 47,48 . Subsequent aridification encouraged increased mobility, resulting in the exploitation of a wider range of steppe environments beyond the traditional Yamnaya cultural sphere to support livestock herds 40,48 .
The shift to more diverse dairy herds in the North Caucasus also overlaps in time with Yamnaya expansions into southeastern Europe, as well as the parallel rise and expansion of the Corded Ware complex across northeastern and central Europe 27 , suggesting that these events may be related to broader changes occurring within steppe and forest-steppe pastoralist societies at the time. Our results suggest that an initial diversification of production strategies to include sheep, goat and cattle milk may have functioned as an adaptation to an increasingly turbulent ecological setting, but this subsequently led to overgrazing and lasting damage to pastures due to ground compaction, soil nutrient loss and decreasing plant biomass 48,69 . At the end of the third millennium bc, coinciding with the emergence of the Lola culture, an intensified drought caused deflation and salinization of the soils in the already dwindling regional watersheds 40,69 . During the Lola period, water-demanding cattle may have decreased in dairying importance from the preceding Catacomb and NCC periods, as only one of six Lola individuals yielded evidence for cattle milk consumption. After 1700 bc, the steppe and piedmont zones of the Northern Caucasus appear to have been largely depopulated until the ninth or eighth century bc 57,70,71 , whereas pastoralist groups continued to occupy the high plateaus of the Caucasus Mountains 72 .

Post-Bronze Age adoption of horse milking. In our dataset, we found no evidence of horse milk consumption until the ninth century bc, when Early Iron Age groups repopulated the North Caucasus steppe and piedmont zones 33,41 . Horses are well adapted to steppe environments, and recent palaeogenomic research has identified the lower Don-Volga region, possibly as early as the mid-sixth millennium bc, as the domestication centre of the DOM2 horses that characterize present-day lineages 43,45 . From the Pleistocene until the Bronze Age, horses were hunted on the Pontic-Caspian steppe and have long been symbolically represented in figurines and ritual deposits 28,73 . Horses are also useful for steppe pastoralists because of their digging (tebenevka) reflex, which allows them to graze through thick snow deposits, thereby opening up winter pasture for ruminants 48,74,75 . In the North Caucasus, skeletal remains of the ancestors of DOM2 horses are sporadically found in steppe kurgans from the Late Maykop period onwards 43 , but the role of horses in these pastoralist societies is unclear. The first undisputed evidence of horse traction dates to ca. 2000 bc at the site of Sintashta east of the Urals, where elaborate horse chariot burials have been found in Middle and Late Bronze Age kurgans 51,76,77 . Earlier Bronze Age wagons, such as those associated with the Late Maykop, Yamnaya and Catacomb cultures, had been pulled by oxen teams 50 . Herding on horseback, which may have begun ca. 2200 bc with the selection of traits suitable for riding 43 , would have enabled individual pastoralists to control more livestock at one time and to access pastures across a wider area 75 . Later, horses became particularly prominent in the archaeological record of Early Iron Age Scythians and Sarmatians, who used horses for cavalry 78,79 . In addition to traction and riding, horses can also be exploited for milk, which is traditionally fermented to produce an alcoholic beverage in contemporary Eurasian steppe cultures 80,81 . However, the origin of horse milking is not known.
Isotopic evidence from lipids in pottery suggests that Przewalski's horses, reflecting a separate domestication lineage (DOM1) 76 , may have been milked as early as the mid-fourth millennium bc at the site of Botai in northern Kazakhstan 76,82 . It is unclear what, if any, influence early milking at Botai had on the management of DOM2 lineages, the ancestors of modern domestic horses. Currently, the earliest proteomic evidence of horse milk consumption comes from two individuals with problematic dates at the Bronze Age site of Kriviyansky IX in the Lower Don region 7 and, later, at the Late Bronze Age site of Uliastai Dood Denzh located in Mongolia, where the dental calculus of an individual dated to ca. 1200 bc with Sintashta-related ancestry yielded evidence of horse milk proteins 2,20 . Despite an apparent early presence of horse milking at Kriviyansky IX, dating to the third, or possibly fourth, millennium bc, we found no other evidence of horse milking in the North Caucasus region during the Early, Middle or Late Bronze Age. Rather, its late appearance in our dataset suggests that horse milking was a highly limited activity while diverse domestication pathways unfolded, and horses were used for various purposes. Horse milking may have been permanently established in the northern Caucasus only after a later reintroduction by pre-Scythian groups during the first millennium bc. Greek texts, such as The Iliad, later referred to these pre-Scythian steppe nomads as horse milk drinkers 83 .

Macroregional perspectives on the spread of dairying. The Pontic-Caspian steppe has long been recognized as a major centre for pastoralist innovation. Here we show that dairying was an early and enduring feature of the pastoralist economy not only in the Northern Caucasus, but also in the South Caucasus. In our dataset, we observed the earliest evidence of milk consumption in the South Caucasus at Alkhantepe, a Late Chalcolithic site with Leilatepe ceramics 84,85 . The contemporaneous Leilatepe and Early Maykop cultures share many features 39,86 , but we found that the agropastoralists at Alkhantepe were milking cattle, whereas we observed only sheep milking at Early and Late Maykop sites in the north. Sheep and cattle have different ecological needs, and, in particular, sheep require less water and can survive harsher winters than cattle. As such, environmental factors may have played a role in influencing the selection of dairy livestock in these two regions. During the third millennium bc, it is known that the economic importance of pastoralism increased in the South Caucasus, especially during the Kura-Araxes period 56,87 , but we did not have corresponding samples to examine this. Although steppe cultural elements, such as kurgans (burial mounds), had been present in the South Caucasus since the Late Chalcolithic 88 , kurgans greatly increased during the Middle Bronze Age 89 , and we next observed dairy product consumption at the Middle Bronze Age fortified agropastoral site of Qızqala, with ruminant dairy proteins present in both individuals analysed for this study. Although Middle Bronze Age cultures in both the North and South Caucasus largely became fully mobile to support their herds 90 , the inhabitants of Qızqala relied on a more flexible subsistence strategy that included both settlement occupation and seasonal movement of livestock 89,91 . Our results show a reliance on dairy technology for subsistence for these mobile pastoralists.
Next, we found evidence of sheep milk consumption by one individual from an intrusive Late Bronze/Early Iron Age burial associated with the Khojaly-Gadabay culture at the Neolithic site of Göytepe. This is the earliest unequivocal evidence of sheep milking in our South Caucasus dataset. Later, during the Greco-Roman era, we observe evidence of sheep, goat and cattle milk at Qabala, a site associated with complex and intensive agriculture as well as with local herding. Despite cultural interaction with adjacent communities of the Pontic-Caspian steppe, communities in the Oka-Volga-Don forested regions maintained economies based on hunting, gathering and fishing that were particularly suited to local ecozones. Stable isotope studies suggest that this was the prevailing economic strategy until the end of the third millennium bc, during the Middle Bronze Age 48,92,93 , when Oka-Volga-Don communities transitioned to agropastoralist subsistence 44 . Although populations further to the east, between the Volga River and the Ural Mountains, practiced ruminant dairying from ca. 3000 bc onwards 7 , the near-complete lack of evidence for ruminant milk consumption from the seven individuals representing the Oka-Volga-Don region in our study is consistent with a late introduction of ruminant dairying west of the Volga, despite the fact that domesticated animals were introduced in small quantities during the late fourth millennium bc. Here, only one Catacomb-associated individual with cultural links to the steppe zone, recovered from the site of Rovenka, yielded ruminant milk proteins, which were sourced from sheep, goats and cattle. In parallel to the expansion of pastoralism to the forest-steppe zone, contact and admixture with late farming groups in eastern Europe, such as Cucuteni-Trypillia and Globular Amphora, resulted in a mixed form of agropastoralism with heavy reliance on pastoralism 94 , followed by a subsequent eastward expansion of the Corded Ware complex during the third and early second millennia bc, which is also attested by archaeogenetic data 22,95 . This sphere of influence includes Fatyanovo/Balanovo and subsequent Abashevo, Sintashta, Andronovo, and Srubnaya groups 94 , and individuals associated with these cultures share very similar genetic profiles. We analysed two individuals linked to the Srubnaya culture at the Middle to Late Bronze Age site of Neplyuyevka in the region east of the Ural Mountains and identified evidence of ruminant milk consumption. Future work combining palaeogenomic and palaeodietary research could help to better clarify the relationships between these populations and the nature and spatio-temporal patterning of dairy technologies in this region.

Conclusion
Proteomic analysis of human dental calculus has revealed a dynamic trajectory of dairy pastoralism in the North Caucasus steppe and adjacent regions from the Eneolithic to the Greco-Roman periods. Dairying was integral for the spread of animal husbandry by groups crossing the Caucasus mountains from south to north during the Eneolithic, and it was quickly adopted and further developed into an effective and sustainable technology, dairy pastoralism, by neighbouring steppe communities. This innovation forms the basis of the Eurasian steppe lifestyle that continues to this day. Initial pastoralist strategies focused on sheep dairying and cattle traction, whereas fully mobile pastoralism arose for the first time during the Yamnaya period.
Deteriorating climatic conditions challenged steppe herders during the Middle and Late Bronze Ages, who responded by diversifying their set of dairying livestock and expanding their herding range, until the steppe was ultimately abandoned in the mid-second millennium bc. Later, following a centuries-long hiatus, the steppe was repopulated by Early Iron Age pastoralists who practised horse milking. The turbulent third millennium bc, during which vast stretches of Eurasia experienced social and demographic upheaval, is now coming into sharper focus. Climatic pressures and the needs of dairy herds altered how pastoralists used the North Caucasus steppe and may have contributed to the heightened mobility of third-millennium-bc steppe herders, whose descendants spread across Eurasia within the span of only a few centuries. Future research on the genomes of ancient dairying livestock and additional dental calculus proteomes from adjacent steppe populations north of the Black Sea and east of the Urals will help to further clarify the origins and dispersals of dairying breeds and practices that promoted the lasting cultural and subsistence traditions that reshaped the Eurasian steppe zone and profoundly transformed the Bronze Age Eurasian world.

Methods
Sampling. Dental calculus sampling was performed on site at archaeological institutions and museums and in a dedicated ancient biomolecules laboratory at the Max Planck Institute for the Science of Human History (MPI-SHH). Disposable nitrile gloves were worn during collection, and calculus was sampled using dental curettes that were replaced or cleaned with isopropanol between samples. Calculus was collected onto weighing paper and stored in microcentrifuge tubes. Samples were further analysed at the MPI-SHH ancient proteomics laboratory, where they were weighed and subsampled before protein extraction. Approximately 5-13 mg of dental calculus was used for each protein analysis.

Radiocarbon dating. A total of 24 new radiocarbon dates were obtained by accelerator mass spectrometry of bone and tooth material at: the Curt-Engelhorn-Zentrum Archäometrie in Mannheim, Germany; the Finnish Museum of Natural History (Hela) in Helsinki, Finland; the Oxford Radiocarbon Accelerator Unit in Oxford, United Kingdom; and the Russian Academy of Sciences in Moscow, Russia. Uncalibrated dates were successfully obtained for all but one tested sample (Supplementary Data 1). An additional 21 previously published radiocarbon dates for individuals in this study were also compiled and analysed, making the total number of directly dated individuals in this study 38 (45 total dates). Dates were calibrated using OxCal v.4.4 96 with the IntCal20 atmospheric curve 97 .
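To make the calibration step concrete: curve calibration maps a measured 14C age onto the calendar scale through the wiggly IntCal20 curve. The following is a minimal sketch of the core calculation only, assuming a three-column calibration table (cal BP, 14C age, 1-sigma) such as the published intcal20.14c file; the toy curve segment and the 5600 +/- 30 BP measurement are hypothetical, and OxCal itself additionally handles rounding conventions and multi-modal range reporting.

```python
import numpy as np

def calibrate(c14_age, c14_err, curve):
    """Calibrate a radiocarbon measurement against a calibration curve.

    `curve` is an (n, 3) array with columns (cal BP, 14C age BP, curve
    1-sigma). Returns the calendar grid and a normalized posterior density.
    """
    cal_bp, mu, sig = curve[:, 0], curve[:, 1], curve[:, 2]
    # Gaussian likelihood at each calendar year, combining measurement
    # and curve uncertainties in quadrature.
    s = np.sqrt(c14_err ** 2 + sig ** 2)
    like = np.exp(-0.5 * ((c14_age - mu) / s) ** 2) / s
    return cal_bp, like / np.trapz(like, cal_bp)

def hpd(cal_bp, post, mass=0.954):
    """Smallest set of calendar years holding `mass` probability (~2 s.d.)."""
    w = np.abs(np.gradient(cal_bp))
    order = np.argsort(post)[::-1]
    csum = np.cumsum((post * w)[order])
    keep = order[: np.searchsorted(csum, mass) + 1]
    return cal_bp[keep].min(), cal_bp[keep].max()

# Toy monotonic curve segment and a hypothetical date of 5600 +/- 30 BP.
toy = np.column_stack([np.arange(6000, 6801),         # cal BP grid
                       np.linspace(5200, 5950, 801),  # curve 14C age
                       np.full(801, 15.0)])           # curve 1-sigma
grid, post = calibrate(5600, 30, toy)
print(hpd(grid, post))  # an approximately 2-sigma calendar range in cal BP
```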
Liquid chromatography-tandem mass spectrometry and data analysis. Archaeological dental calculus samples from 45 individuals and 5 extraction non-template controls were processed using a filter-aided sample-preparation protocol, modified for ancient proteins (https://doi.org/10.17504/protocols.io.7vwhn7e). In brief, dental calculus was demineralized in 0.5 M EDTA, and proteins were solubilized and reduced using SDS lysis buffer (4% SDS, 0.1 M DTT, 0.1 M Tris HCl). Buffer exchange in 8 M urea and total protein isolation were performed using a Microcon 30 kDa centrifugal filter unit with an Ultracel-30 membrane (Millipore), followed by alkylation using iodoacetamide. Following buffer replacement with triethylammonium bicarbonate (TEAB; 0.05 M), the proteins were digested overnight with sequencing-grade modified trypsin (Promega) at 37 °C. Peptides were recovered by centrifugation in TEAB, acidified with trifluoroacetic acid to pH <3 and desalted using C18 stage tips (Pierce). Peptides were analysed by liquid chromatography-tandem mass spectrometry using a Q-Exactive mass spectrometer (Thermo Fisher Scientific) coupled to an ACQUITY UPLC M-Class system (Waters AG) at the Functional Genomics Center Zurich of the University/ETH Zurich. Spectra were acquired from 300-1,700 m/z with an automatic gain control target of 3 × 10^6, a resolution of 70,000 (at 200 m/z) and a maximum injection time of 110 ms. The quadrupole isolated precursor ions with a 2.0 m/z window, a 5 × 10^4 automatic gain control value and a maximum fill time of 110 ms. Twelve of the most intense precursor ions for each MS1 scan were fragmented via high collision dissociation with a normalized collision energy of 25, scanned with a resolution of 35,000 (at 200 m/z) and a fixed first mass of 200 m/z. Filter criteria for MS2 selection were an intensity threshold of 9.1 × 10^3, and unassigned or singly charged ions were excluded. Selected precursor ions were put onto a dynamic exclusion list for 30 s. For liquid chromatography, the solvent composition at the two channels was 0.1% formic acid in water for channel A and 0.1% formic acid in acetonitrile for channel B. Next, 4 µl of each peptide sample was loaded onto a trap column (Symmetry C18, 100 Å, 5 µm, 180 µm × 20 mm; Waters AG) at a flow rate of 15 µl min^-1 of 99% solvent A for 60 s at room temperature. Peptides eluting from the trap column were refocused and separated on a C18 column (HSS T3 C18, 100 Å, 1.8 µm, 75 µm × 250 mm; Waters AG). The column temperature was 50 °C. Peptides were separated over 73 min with the following gradient: 8-22% solvent B in 49 min, 22-32% solvent B in 11 min and 32-95% solvent B in 5 min. The column was cleaned with 95% solvent B for 5 min after the separation and re-equilibrated at loading conditions for 8 min before initializing the next run. Potential contamination was monitored using extraction blanks. Tandem mass spectra were converted to Mascot generic files by MSConvert version 3.0.11781 using the 100 most intense peaks in each spectrum. All tandem mass spectrometry samples were analysed using Mascot (Matrix Science, version 2.6.0). Mascot was set up to search the SwissProt Release 2019_08 database (560,823 entries) assuming the digestion enzyme trypsin, with the automatic decoy option. Mascot was searched with a fragment ion mass tolerance of 0.050 Da and a parent ion tolerance of 10.0 ppm. The number of missed cleavages was specified as one. Carbamidomethylation of cysteine was specified in Mascot as a fixed modification. Deamidation of asparagine and glutamine and oxidation of methionine and proline were specified in Mascot as variable modifications. Scaffold version 4.9.0 (Proteome Software Inc.) was used to validate protein and peptide identifications for each sample. Peptide identifications were accepted if they could be established at greater than 90% probability by the PeptideProphet algorithm 99 with Scaffold delta-mass correction. Protein identifications were accepted if they could be established at greater than 95% probability and contained at least two unique peptides.
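The acceptance criteria above amount to a simple filter over validated identifications. A minimal sketch follows; the data structure and the example accession and peptide values are hypothetical illustrations, not Scaffold's actual export format.

```python
from dataclasses import dataclass, field

@dataclass
class Protein:
    accession: str
    probability: float  # ProteinProphet probability, 0-1
    # (sequence, PeptideProphet probability) pairs for matched peptides
    peptides: list = field(default_factory=list)

def accept(protein, pep_prob=0.90, prot_prob=0.95, min_unique=2):
    """Keep peptides above 90% probability; accept the protein only if its
    probability exceeds 95% and it retains at least two unique peptides."""
    unique = {seq for seq, p in protein.peptides if p > pep_prob}
    return protein.probability > prot_prob and len(unique) >= min_unique

# Hypothetical milk-protein hit with one peptide sequence observed twice.
blg = Protein("LACB_SHEEP", 0.99,
              [("TPEVDDEALEK", 0.98), ("VLVLDTDYK", 0.95), ("VLVLDTDYK", 0.97)])
print(accept(blg))  # True: two unique peptides above threshold
```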
Probabilities for proteins were assigned using the ProteinProphet algorithm 98 . Proteins that contained similar peptides that could not be differentiated based on tandem mass spectrometry analysis alone were grouped to satisfy the principles of parsimony, and proteins that shared significant peptide evidence were grouped into clusters. Individual protein and peptide false discovery rates are listed in Supplementary Data 3.

Reporting Summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability
Raw data files are available through the ProteomeXchange Consortium via the PRIDE partner repository under accession PXD027728. Source data are provided with this paper.
Study description. This study conducted shotgun tandem mass spectrometry on proteins present in human dental calculus obtained from 29 archaeological sites in Russia and Azerbaijan. Proteins were identified using Mascot and validated using Scaffold, and dietary proteins were analyzed to reconstruct past dairy pastoralism strategies.

Research sample. 46 human dental calculus specimens were analyzed from 45 individuals at 29 archaeological sites. A comprehensive overview of these specimens is provided in Dataset S1, and detailed osteological and archaeological context data are provided in the SI.
2023-02-10T15:06:03.257Z
2022-04-07T00:00:00.000
{ "year": 2022, "sha1": "0fb14577dc334c7a939241e34780950566dbf34d", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41559-022-01701-6.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "0fb14577dc334c7a939241e34780950566dbf34d", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
15671180
pes2o/s2orc
v3-fos-license
Stem Cell Antigen-1 (Sca-1) Regulates Mammary Tumor Development and Cell Migration

Background Stem cell antigen-1 (Sca-1 or Ly6A) is a glycosylphosphatidylinositol (GPI)-anchored cell surface protein associated with both stem and progenitor activity, as well as tumor-initiating potential. However, at present the functional role of Sca-1 is poorly defined. Methodology/Principal Findings To investigate the role of Sca-1 in mammary tumorigenesis, we used a mammary cell line derived from an MMTV-Wnt1 mouse mammary tumor that expresses high levels of endogenous Sca-1. Using shRNA knockdown, we demonstrate that Sca-1 expression controls cell proliferation during early tumor progression in mice. Functional limiting dilution transplantations into recipient mice demonstrate that repression of Sca-1 increases the population of tumor-propagating cells. In scratch monolayer assays, Sca-1 enhances cell migration. In addition, knockdown of Sca-1 was shown to affect cell adhesion to a number of different extracellular matrix components. Microarray analysis indicates that repression of Sca-1 leads to changes in expression of genes involved in proliferation, cell migration, immune response and cell organization. Conclusions/Significance Sca-1 exerts marked effects on cellular activity and tumorigenicity both in vitro and in vivo. A better understanding of Sca-1 function may provide insight into the broader role of GPI-anchored cell surface proteins in cancer.

Introduction
Stem cell antigen-1 (Sca-1 or Ly6A) is a member of the Ly6 family of glycosylphosphatidylinositol (GPI)-anchored cell surface proteins. Sca-1 has long been associated with murine stem/progenitor cells [1] and is localized to lipid rafts, where it regulates signaling complexes [2]. Functional studies using Sca-1-null mice have revealed several phenotypes. Interferon-stimulated hematopoietic stem cells (HSCs) upregulate Sca-1 in a Stat1-dependent manner. Additionally, minor defects in lineage skewing were observed in the hematopoietic system of Sca-1-null mice. Osteoporosis and reduced muscle size were observed in aging Sca-1-null mice. Moreover, Sca-1 is necessary for matrix metalloproteinase (MMP) activity during muscle repair. When Sca-1 or other Ly6 family members are upregulated on tumor cells, they are commonly associated with an aggressive phenotype [7]. Sca-1+ cells are expanded in mammary tumors induced by the Wnt/β-catenin pathway [8,9]. Despite its association with stem/progenitor cells, little is known about the biological function of Sca-1. To address this question in the context of mammary tumor development, we used a cell line derived from primary tumors of MMTV-Wnt1 transgenic mice, which retained high expression of Sca-1 and could be transplanted into the cleared fat pad of syngeneic mice. We found that Sca-1 promotes cell migration and decreases cell adhesion in vitro and regulates tumorigenicity upon transplantation. Furthermore, Sca-1 regulates gene expression in multiple pathways involved in tumor progression. This study demonstrates that modulating Sca-1 expression has profound effects on cellular function and tumor development.

Results
Sca-1 promotes cell migration
Sca-1 is localized to lipid rafts [2], similar to urokinase plasminogen activator receptor (UPAR), another well-characterized Ly6 family member. UPAR regulates adhesion, migration and angiogenesis in breast cancer [10]. Therefore, we asked whether Sca-1 regulates cell migration using a mammary tumor cell line (Wnt1-YL) derived from primary MMTV-Wnt1 tumors.
Previous studies [4,8] revealed high levels of Sca-1 expression in MMTV-Wnt1-induced hyperplasia and tumors, and we were able to develop several cell lines from these tumors. The Wnt1-YL cells uniformly express high levels of Sca-1 as detected by flow cytometry (Figure 1A). We then knocked down Sca-1 surface expression using shRNA lentiviral technology. A shift in mean fluorescence intensity revealed that Sca-1 surface expression was reduced approximately 30-fold in the Wnt1-YL-shSca1 (shSca-1) cells as compared to control cells transduced with an shRNA targeting luciferase (shLuc) (Figure 1A). This reduction in Sca-1 expression did not alter cell growth as assessed by a growth curve over the period of 4 days (Figure 1B). When cell migration was assessed using a wound healing scratch monolayer assay, shSca-1 cells exhibited significantly slower cell migration at 12-24 hours (Figure 1C and D). A rescue experiment was next performed by reintroduction of a Sca-1 expression construct containing an altered shRNA-binding site, to rule out off-target effects of the Sca-1 shRNA. Re-expression of Sca-1 reversed the migration phenotype, demonstrating the specificity of the shRNA knockdown (Figure 1C and D). This was also demonstrated independently by microarray analysis in which Ly6a, but not other Ly6 family members, was selectively knocked down by these shRNAs (Table S1). An early lag phase between 0-12 hours was observed in these rescue experiments, where there is a significant difference between the shLuc control cells and the shSca-1+Sca-1 rescued cells; however, this delay was overcome by 18 hours. Notably, the cells appeared to migrate collectively as a sheet of cells rather than as single cells.

[Figure 1 caption: Repression of Sca-1 delays cell migration. Flow cytometry analysis of Sca-1 surface expression, representative histograms (A). Growth curve of shLuc (blue), shSca-1 (red) and shSca-1+Sca-1 (black) cells (B). Images of scratch monolayer migration assay at times 0, 12, and 24 hours (C-K), representative images of 3 experiments performed in triplicate. shLuc (C-E), shSca-1 (F-G), and shSca-1+Sca-1 (I-K), scale bars = 200 µm. Cell migration graph (L). shLuc (blue), shSca-1 (red), shSca-1+Sca-1 (black); * represents statistical significance compared to shLuc control (* = p < .05, *** = p < .001). doi:10.1371/journal.pone.0027841.g001]

Sca-1 regulates cell adhesion
We hypothesized that the delay in migration in the shSca-1 cells was attributed to alterations in cell adhesion. In order to determine if Sca-1 regulates cell adhesion, we evaluated the adhesion of the Wnt1-YL cells to a panel of extracellular matrix (ECM) proteins (collagen I, collagen IV, fibronectin, laminin, and vitronectin). shSca-1 cells showed increased adhesion to fibronectin, collagen I, collagen IV, and laminin compared to control cells (Figure 2A). In rescue experiments, adhesion of shSca-1+Sca-1 cells to collagen I, collagen IV, and fibronectin returned to levels similar to the shLuc control cells (Figure 2A). The increase in adhesion to laminin was enhanced in shSca-1+Sca-1 cells (Figure 2A). Additionally, each group exhibited relatively weak adhesion to vitronectin; however, the shSca-1+Sca-1 cells showed reduced adhesion compared to control cells (Figure 2A). These results suggest that the delayed migration exhibited by shSca-1 cells may be due to increased cell-matrix interactions.
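The migration phenotype above is read out as wound closure over time, compared between groups at each time point. A minimal sketch of how such scratch measurements are typically quantified follows; the widths are hypothetical, and the study itself compared Axiovision distance measurements by two-way ANOVA with Bonferroni post-tests, as described in the Methods.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate scratch widths (um) at 0, 12 and 24 h for
# control (shLuc) and knockdown (shSca-1) monolayers.
widths = {
    "shLuc":   np.array([[800.0, 790.0, 810.0],    # 0 h
                         [430.0, 450.0, 420.0],    # 12 h
                         [ 60.0,  80.0,  70.0]]),  # 24 h
    "shSca-1": np.array([[805.0, 795.0, 800.0],
                         [560.0, 580.0, 555.0],
                         [200.0, 220.0, 210.0]]),
}

# Percent wound closure per replicate, relative to its own 0 h width.
closure = {k: 100.0 * (w[0] - w) / w[0] for k, w in widths.items()}

for i, t in enumerate((12, 24)):
    a, b = closure["shLuc"][i + 1], closure["shSca-1"][i + 1]
    # Welch's t-test per time point for illustration; a two-way ANOVA
    # additionally models the time factor and its interaction with group.
    _, p = stats.ttest_ind(a, b, equal_var=False)
    print(f"{t} h: shLuc {a.mean():.1f}% vs shSca-1 {b.mean():.1f}% closed (p = {p:.3f})")
```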
To investigate the possible cause of these altered adhesive properties, we evaluated the surface expression of integrins (receptors for ECM proteins) by flow cytometry. We analyzed a panel of integrins (α2, α3, α5, α6, αV, β1, β3, and β4) expressed in normal mammary epithelial cells. α2, α5, α6, αV and β1 were all expressed (Figure 2B-F); however, only α5-integrin showed a difference in expression level between the shSca-1 and control cells. α5-integrin expression increased 1.7-fold in shSca-1 cells (Figure 2C). Notably, α6 and β1, which bind laminin as a heterodimer, were expressed at high levels as compared to their respective isotype controls (Figure 2D, F). Interestingly, these receptors have also been used to isolate cancer stem cells in p53-/- mammary adenocarcinomas [11]. These data do not rule out the possibility that integrin activity rather than expression may be altered by Sca-1. Alternatively, integrins that were not evaluated may be aberrantly expressed, accounting for the differences in cell adhesion.

Repression of Sca-1 increases tumor propagation ability and accelerates tumor growth
We next determined if the alterations in migration and adhesion in Sca-1-deficient cells would influence tumor propagation and growth in vivo. To determine if the repression of Sca-1 affects tumor outgrowth, shSca-1 or shLuc cells were transplanted into the cleared mammary fat pad of 3-4-week-old syngeneic recipient mice, at doses ranging from 500 to 10,000 cells. Limiting dilution transplantation revealed that shSca-1 cells display a 9-fold increase in tumor-propagating potential (1/654) as compared to shLuc control cells (1/5963; p < 0.001) (Table 1) [12]. A significantly greater number of shSca-1 tumors were observed at the lower cell doses of 500-2,000 cells (Table 1). We subsequently studied tumor latency with injections of 10,000 cells, since both knockdown and control cell lines efficiently develop tumors at this concentration. shSca-1-derived tumors had a median latency of 2 weeks as compared to 5 weeks for shLuc-derived tumors (Figure 3A). To determine if the accelerated tumor development was due to increased proliferation in the shSca-1 cells or increased cell death in the shLuc control cells, tumor sections were stained for BrdU or analyzed by TUNEL assay, respectively. This was investigated in both early histological lesions and in palpable tumors. When mammary fat pads were harvested 2 weeks following transplantation and early lesions analyzed, 10% of the shLuc tumor cells were BrdU-positive in comparison to shSca-1 tumor cells, in which 20% were BrdU-positive (Figure 3B-C, F). However, in established tumors this two-fold difference in proliferation was not observed, and approximately 20% of the cells were BrdU-positive in both groups (Figure 3D-F). Neither early lesions nor established tumors showed differences in cell death (Figure S1), suggesting that the difference in latency was attributed to a transient increase in proliferation in the early lesions. These data are consistent with the limiting dilution transplantation results and suggest that knockdown of Sca-1 influences tumor initiation in this model, but appeared to have little effect in established tumors.
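Tumor-propagating cell frequencies such as the 1/654 and 1/5963 quoted above are conventionally estimated from limiting dilution take rates under a single-hit Poisson model (the assumption behind tools such as ELDA). A minimal sketch follows, with hypothetical take rates; the published analysis additionally reports confidence intervals and a between-group test, which are omitted here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical limiting dilution data: cells injected per cleared fat pad,
# number of injections, and number of resulting tumors at each dose.
doses   = np.array([500, 1000, 2000, 10000])
n_inj   = np.array([10, 10, 10, 10])
n_tumor = np.array([2, 4, 8, 10])

def neg_log_lik(f):
    """Single-hit Poisson model: P(no tumor | dose n) = exp(-f * n),
    where f is the frequency of tumor-propagating cells."""
    p_neg = np.clip(np.exp(-f * doses), 1e-12, 1 - 1e-12)
    return -np.sum(n_tumor * np.log(1 - p_neg) + (n_inj - n_tumor) * np.log(p_neg))

fit = minimize_scalar(neg_log_lik, bounds=(1e-7, 1e-2), method="bounded")
print(f"estimated frequency: 1 in {1 / fit.x:.0f} cells")
```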
To determine the histological characteristics of these tumors, we performed immunostaining for mammary epithelial markers (K5, K8, pan-Keratin). Interestingly, hematoxylin and eosin staining showed a mesenchymal (spindle-shaped) morphology distinctly different from the transgenic MMTV-Wnt1 tumors from which the cell line was derived. The tumors were positive for the basal marker K5 (Figure 4), while only a small percentage of the cells were positive for the luminal keratin marker K8, further suggesting a divergence from the parental MMTV-Wnt1 tumors, which expressed both basal and luminal keratins [8].

Identification of differentially expressed genes
To determine the potential mechanisms by which Sca-1 regulates cell migration, adhesion, and tumor development, we performed an Affymetrix mouse genome 430A 2.0 array on cDNA comparing shLuc and shSca-1 cells grown in vitro. The array identified 448 unique genes (574 Affymetrix probe sets) with p < 0.01 and fold change > 1.5 (Table S1). One hundred and twenty-six genes were upregulated, and 322 genes were downregulated (Figure 5A). Importantly, Sca-1 was the only Ly6 family member on the chip that was significantly downregulated. Differences in gene expression of several genes were verified by qRT-PCR analysis (Figure S2). Repression of Sca-1 led to the upregulation of several inflammatory chemokines: chemokine (C-C motif) ligand 2 (Ccl2), Ccl7, and chemokine (C-X-C motif) ligand 5 (Cxcl5). Additionally, repression of Sca-1 altered the expression of genes involved in proliferation, cell movement, cell-cell signaling and cell organization (Figure 5B).

Discussion
Sca-1 is widely accepted as a stem/progenitor cell marker in normal mouse tissues [3,13-15]. However, Sca-1 eGFP/eGFP mice did not exhibit a reproducible phenotype in mammary gland development in our laboratory (unpublished data). Previous studies have shown that Sca-1+ cells are expanded in Wnt/β-catenin-induced mammary tumors [8,9]. Additionally, a Ly6 family member, Ly-6D, is upregulated in a variety of murine tumors and triple-negative breast cancers [16]. Despite these associations, there is limited knowledge of the functional role of Sca-1. Our findings indicate that Sca-1 plays an important role in mammary tumorigenesis, as revealed using a novel cell line derived from MMTV-Wnt-1 mouse mammary tumors. First, Sca-1 promotes cell migration and affects cell adhesion to several ECM substrates in vitro. Second, Sca-1 regulates the frequency of tumor-propagating cells and tumor cell proliferation in early lesions. These studies point to epithelial-ECM interactions as mediators of Sca-1 function; however, direct effects on downstream signaling cannot be excluded. There are several plausible explanations for our observations. First, Sca-1 may directly (or indirectly) interact with integrins, modulating their ability to heterodimerize and bind ECM proteins, and/or modulating the strength of integrin-ECM interactions. The increased expression of α5-integrin in shSca-1 cells likely accounts for the increased adhesion to fibronectin via the α5β1 heterodimer. α5-integrin has been implicated as a suppressor of metastasis, and α6-integrin promotes metastasis in breast cancer cell lines [17]. Since both of these integrins are expressed at similar levels in the Wnt1-YL cells, further investigation is required to fully define the relationship between Sca-1 and the role of integrins in this system. Next, Sca-1 may interact with non-integrin receptors such as growth factor receptors that cooperate with integrin signaling [18-20]. Alternatively, Sca-1 may alter interactions with cell surface receptors that act independently of integrin signaling.
Additionally, Sca-1 may regulate the activation of MMPs, leading to the release/activation of growth factors stimulating proliferation of tumor cells, as observed in skeletal muscle cell regeneration [21]. The Wnt1-YL cells exhibited collective cell migration as a sheet in a scratch monolayer assay. In transwell migration assays evaluating single-cell migration across a porous membrane, we did not observe statistically significant differences in migration (data not shown). Furthermore, when the cells were seeded in matrigel (laminin-rich matrix) for a 3D morphogenesis/invasion assay, the cells did not exhibit differences in terms of acini/colony formation frequency, size, morphology or invasive properties (data not shown). These observations suggest that Sca-1 is responsible for subtle changes in cell-cell and cell-ECM interactions in this cell line. Deciphering these subtleties in the context of cell migration and invasion may require further investigation of this cell line on matrices of single ECM substrates. Interestingly, repression of Sca-1 alters chemokine expression, influencing the recruitment of inflammatory infiltrates. Since immune cells influence many processes, including angiogenesis, cell invasion and matrix remodeling, interactions between tumor cells and the immune system have attracted increasing interest in the past decade. Immune cells in both the innate and adaptive immune systems have proved to be important in tumor development and metastasis [22-25]. Chemokine secretion from shSca-1 cells may recruit immune cells with pro-tumor activities, accounting for the accelerated tumor growth. Furthermore, Sca-1 not only regulates chemokine expression; Wnt1-YL cells grown in culture also show differential secretion of both cytokines and chemokines (data not shown). Also, insulin-degrading enzyme (Ide-1), a protein that physically interacts with Sca-1 to regulate differentiation of skeletal muscle cells [2], was downregulated in shSca-1 cells. Ide-1 catalyzes the degradation of mitogenic peptides, attenuating proliferative signals. Reduction in this activity may also account for the proliferative response seen during shSca-1 tumor development. Additionally, Fgf20, a Wnt/β-catenin target gene, was upregulated in response to Sca-1 repression. Cooperation between the Wnt/β-catenin and FGF signaling pathways has been reported in human cancers, and our laboratory has previously shown a strong association leading to rapid proliferation upon simultaneous activation of these pathways [20]. Thus, Sca-1 potentially regulates multiple aspects of tumor development. The impact of these changes in mRNA expression needs to be determined with regard to protein expression and activity to better understand the role of Sca-1 in tumorigenesis. Recently, Upadhyay and colleagues showed that Sca-1 inhibited TGF-β signaling by disrupting the heterodimerization of the TGF-β receptors and repressing expression of Gdf10, a TGF-β ligand, in a mammary adenocarcinoma cell line induced by medroxyprogesterone (MP) and 7,12-dimethylbenz(a)anthracene (DMBA) [26]. Their tumor outgrowth data indicate that repression of Sca-1 reduces tumorigenicity or outgrowth potential, as observed in normal mammary epithelial cells. Similarly, another report shows delayed tumor development in MP/DMBA-induced tumors in Sca-1 knock-out mice [27]. In this case, the delay in tumor development was attributed to the upregulation and activation of PPARγ. In contrast, our data indicate that Sca-1 may restrict cell growth.
There are several explanations for these discrepancies. First, the tumors were developed under different conditions, likely driven by different signaling pathways, which have been shown to yield very different tumor histopathologies [8]. Second, the relative level of Sca-1 on the cell surface is likely to govern how Sca-1 regulates signaling activities [1]. This may also account for the lack of overlapping genes in the microarrays when comparing the data of Upadhyay et al. and our data set. Yuan et al. only shared 15 genes in common with our data set with a p < 0.01 and a fold change > 1.5 (Table S2). These genes were all upregulated, but did not reveal an enrichment of a common functional pathway. Since the tumor cell models employed in the two studies were developed using different methods, it is likely that they express Sca-1 at different levels. Furthermore, it is unlikely that the efficiency of Sca-1 repression is the same, as different shRNA constructs were used. Cell context no doubt plays an important role in influencing the effects of Sca-1 in tumors that may have been derived from very different cell lineages. For instance, CD24^high/Sca-1^- luminal mammary epithelial cells (MECs) do not express ER and PR and have increased in vitro progenitor activity, in contrast to CD24^high/Sca-1^+ luminal MECs that are ER and PR positive with reduced in vitro progenitor activity [6]. Nevertheless, these studies highlight that Sca-1 likely regulates multiple cellular processes. In conclusion, we provide evidence that Sca-1 regulates multiple cellular functions in mammary tumor cells. Our data highlight the importance of studying Sca-1 in the context of tumor development. To definitively differentiate the roles that Sca-1 plays in tumor initiation and tumor progression, it will be necessary to use a conditional system in which Sca-1 can be knocked down at various stages of tumor development. Additionally, it may be necessary to evaluate the role of Sca-1 in tumor subpopulations in models in which tumor-initiating cells are present. Further investigations along these lines will lead to a better understanding of GPI-anchored protein functions in tumors.

Materials and Methods
Ethics Statement
Mice were maintained in accordance with the National Institutes of Health Guide for the Care and Use of Experimental Animals with approval from the Baylor College of Medicine Institutional Animal Care and Use Committee (Animal Protocol: AN-504).

Lentiviral Transductions
293T-packaging cells were transiently transfected with pLKO-shRNA vectors (Open Biosystems), Gag-Pol and VSV-G plasmids using FuGENE 6 (Roche) according to the manufacturer's guidelines. Forty-eight hrs after transfection, virus-containing medium was collected from transfected 293T cells, filtered through a 0.45-µm syringe filter, and applied to Wnt1-2508 cells. The cells were spun at 300 g in a swinging platform rotor for 30 min. After 24 hrs, the lentiviral supernatant was removed from Wnt1-2508 cells and replaced with fresh medium. Forty-eight hrs later, cells were trypsinized and split at a low density with the addition of 4 µg/ml puromycin (Sigma) to select for transduced cells.

Transplants
Clearance of the mammary fat pad and MEC transplantation procedures were performed as previously described [28]. Cells were trypsinized with 0.25% Trypsin-EDTA (Invitrogen) and counted using a Vi-CELL XR Cell Viability Analyzer (Beckman Coulter). The designated number of cells were washed and resuspended in Hank's balanced salt solution (Invitrogen).
The cells were injected into the cleared inguinal fat pad of 3-4-week-old FVB/n mice (Harlan). Tumors were allowed to develop for up to 16 weeks.

Growth Curve
Cells were plated at 50,000 cells/well in 6-well plates and replenished with fresh medium every 48 hrs. Cells were trypsinized and counted every 24 hrs for 4 days using a Vi-CELL XR Cell Viability Analyzer (Beckman Coulter).

Adhesion Assay
10^5 cells were seeded onto CytoMatrix™ Cell Adhesion Strips coated with BSA, Collagen Type I and IV, Fibronectin, Laminin, or Vitronectin (Millipore) according to the manufacturer's guidelines. The cells were allowed to adhere for 1 hr at 37°C, non-adherent cells were washed off with PBS, and adherent cells were stained with 0.2% crystal violet. The relative attachment was determined by absorbance at 560 nm on a microplate reader, and all samples were normalized to BSA-coated wells. Statistical analysis was performed by one-way ANOVA followed by Tukey's multiple comparisons test.

Migration Assay
Cells were grown to confluence in 6-well plates. A p1000 pipet tip was used to make a scratch down the center of each well. Following the scratch, the cells were washed and refreshed with complete medium. Pictures were taken on an inverted microscope (Zeiss) every 6 hrs for 24 hrs to evaluate migration across the scratch. Axiovision software (Zeiss) was used to measure the distance across each scratch. For each experiment, 3 fields along the scratch of each well were analyzed in triplicate for each sample. A two-way ANOVA followed by Bonferroni tests was used to compare the means at each time point.

mRNA Real Time-PCR
cDNA templates were generated using SuperScript II as previously described. Quantitative PCRs were run using SYBR Green reagent (Applied Biosystems) on a StepOnePlus thermocycler (Applied Biosystems), normalized to β-actin, and fold changes were calculated using the comparative CT (ΔΔCT) method with StepOne software v2.0.1 (Applied Biosystems). Primer sequences for Sca-1, Ccl2, Ccl7, Cxcl5, Mmp-9, and S100a8 were obtained from Roche Applied Science.

Immunohistochemistry and Immunofluorescence
Mice were injected with 3 mg/mL BrdU (0.01 mL/g body weight) two hrs prior to sacrifice. Tumors were harvested and fixed in 4% paraformaldehyde for two hrs on ice. Tissues were embedded in paraffin blocks, and 6-8 µm sections were cut for immunostaining. Sections were boiled in sodium citrate antigen retrieval buffer for twenty minutes. Sections were blocked with 5% BSA, 0.05% Tween-20 in PBS for immunohistochemistry and in 10% goat serum in PBS for immunofluorescence. Primary antibodies were incubated at 4°C overnight. BrdU (1:10; BD), K5 (1:10,000; Covance), K8 (1:5000; Univ. of Iowa), pan-Keratin (1:5000; Sigma).

Microarray
Total RNA was isolated from cells using TRIzol Reagent (Invitrogen), and then cDNA was made from total RNA with SuperScript II (Invitrogen) using random primers. cDNA was treated with RNase H (Invitrogen) to remove RNA. Microarray analysis was done with the Affymetrix MG 430 2.0 chip. Statistical analysis was done with the dChip software package (www.dChip.org), using the PM-MM model and invariant set normalization. Differentially expressed genes were identified using a two-sided t-test and fold change on log-transformed data. Java TreeView represented expression values as color maps [29]. Microarray data have been deposited into the Gene Expression Omnibus database (GSE30684) and followed MIAME requirements.
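The comparative CT calculation used for the qRT-PCR analysis reduces to a one-line formula once mean Ct values are in hand: ΔΔCT is the ΔCt of the sample minus the ΔCt of the control, and fold change is 2 to the power of -ΔΔCT. A minimal sketch with hypothetical Ct values:

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Comparative CT method: normalize the target gene to the reference
    gene (beta-actin here), then express the sample relative to the
    control as 2 ** -(ddCT)."""
    ddct = (ct_target_sample - ct_ref_sample) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** -ddct

# Hypothetical mean Ct values for Sca-1 in shSca-1 vs shLuc cells.
print(fold_change(28.4, 17.1, 23.2, 17.0))  # ~0.03, i.e. strong knockdown
```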
Supporting Information
Figure S1. Repression of Sca-1 did not alter cell death. TUNEL staining of tumor sections (A-E). Positive control, DNaseI-treated shLuc tumor section (A). shLuc and shSca-1 early lesions (B, C) and tumors (D, E). (TIF)
Figure S2. qRT-PCR analysis of selected genes in shLuc and shSca-1 cells. Relative mRNA expression of Sca-1, Mmp-9, S100a8, Cxcl5, Ccl2, and Ccl7 (A-F, respectively). (TIF)
Table S1. Differentially expressed genes in shSca-1 tumor cells. Listed are statistically significant genes with p < 0.01 and a fold change > 1.5 in shSca-1 cells compared to shLuc control cells. (XLS)
Table S2. Upregulated genes associated with Sca-1 loss in tumors. Listed are the statistically significant genes upregulated with p < 0.01 and a fold change > 1.5 in both shSca-1 cells and MP/DMBA tumors in Sca-1 knockout mice. (XLS)
2018-02-15T19:22:56.546Z
2011-11-29T00:00:00.000
{ "year": 2011, "sha1": "3ab8739c936123fb2de94caa620ca67a855a69b6", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0027841&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3ab8739c936123fb2de94caa620ca67a855a69b6", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
17522620
pes2o/s2orc
v3-fos-license
Impact of statins on risk of new onset diabetes mellitus: a population-based cohort study using the Korean National Health Insurance claims database

Statin therapy is beneficial in reducing cardiovascular events and mortality in patients with atherosclerotic cardiovascular diseases. Yet, there have been concerns about an increased risk of diabetes with statin use. This study aimed to evaluate the association between statins and new onset diabetes mellitus (NODM) in patients with ischemic heart disease (IHD) utilizing the Korean Health Insurance Review and Assessment Service claims database. Among adult patients with preexisting IHD, new statin users and matched nonstatin users were identified at a 1:1 ratio using proportionate stratified random sampling by sex and age. They were subsequently propensity score matched further on age and comorbidities to reduce selection bias. Overall incidence rates, cumulative rates and hazard ratios (HRs) between statin use and occurrence of NODM were estimated. Subgroup analyses were performed according to sex, age group, and the individual agents and intensities of statins. A total of 156,360 patients (94,370 statin users and 61,990 nonstatin users) were included in the analysis. The incidence of NODM was 7.8% among statin users and 4.8% among nonstatin users. The risk of NODM was higher among statin users (crude HR 2.01, 95% confidence interval [CI] 1.93-2.10; adjusted HR 1.84, 95% CI 1.63-2.09). Pravastatin had the lowest risk (adjusted HR 1.54, 95% CI 1.32-1.81), while those who were exposed to more than one statin were at the highest risk of NODM (adjusted HR 2.17, 95% CI 1.93-2.37). We conclude that statins as a class are associated with an increased risk of NODM in patients with IHD, and we believe this study contributes to a better understanding of the statin-NODM association by analyzing statin use in a real-world setting. Periodic screening and monitoring for diabetes are warranted during prolonged statin therapy in patients with IHD.

Introduction
In collaboration with the National Heart, Lung, and Blood Institute, the American College of Cardiology and the American Heart Association released updated guidelines for the treatment of blood cholesterol for primary and secondary reduction of atherosclerotic cardiovascular diseases. The Expert Panel identified specific patient groups who are most likely to benefit from statin therapy and recommended initiation of moderate- or high-intensity statin therapy based on the patient's risk profile. 1 3-hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase inhibitors, statins, are proven to reduce major cardiovascular outcomes, 2-4 but there are concerns regarding the risks related to statin use. 5 Clinical trials reported that statins reduced the risk of type II diabetes mellitus (T2DM) or were beneficial for reducing coronary events in people with T2DM. 6,7 However, more recently, studies have raised concerns regarding the risk related to the use of statins. One of the most noticeable issues is that statin use may increase the risk of developing T2DM. 1,8-10 T2DM affects more than 300 million individuals and contributes to significant morbidities and mortalities worldwide. 11
T2DM has been recognized as an independent risk factor for ischemic heart disease (IHD), and evidence shows that in patients with established IHD, comorbidity of T2DM significantly increases the IHD-related mortality rate [12]. T2DM is increasing especially in Asian countries, and studies have shown that Asian individuals are at higher risk of developing T2DM than people of European ancestry [13]. Nevertheless, only a small number of Asians were included in pivotal clinical trials, and clinical practice guidelines do not consider ethnicity in their recommendations for optimizing statin therapy in patients with cardiovascular diseases [1,8,14-16]. Data suggest that Asian individuals are more sensitive to statin therapy, and hence adverse effects may be greater [17,18]. The overall effects of statin therapy on T2DM in Asian patients with IHD are largely unknown, and little attention has been given to possible differences among statin agents and intensities. Therefore, we utilized the Korean Health Insurance Review and Assessment Service (HIRA) claims database to evaluate the association between statin use and new onset diabetes mellitus (NODM) in patients with IHD.

Data source

This was a retrospective cohort study conducted using the Korean HIRA database. The database consists of records, which health care institutions submit for medical claim reimbursement to the HIRA, of all the beneficiaries of the Korean National Health Insurance program. The National Health Insurance program is a universal health care system that allows beneficiaries to access any of the contracted medical facilities and institutions in Korea with low co-payment [19]. Out-of-pocket costs apply to all enrollees for hospital and pharmacy visits. Those who are unable to afford co-payments are covered by the national insurance and exempted from co-payments. Therefore, the HIRA database consists of records of all Koreans, including the lowest socio-economic classes. The database comprises clinic and hospital visit records that consist of patient information such as age, sex, diagnosis, medical procedures and services, type of health care institution, dates of clinic visits, admission dates, discharge dates, length of hospitalization, and medical specialty. Additionally, it contains information regarding prescribed medications, such as brand and generic names, single administration doses, total daily doses, strength, route of administration, prescription date, and prescribed total day supply. The diagnoses were coded using the sixth revision of the Korean Standard Classification of Diseases, which reflects the tenth revision of the International Classification of Diseases. The medical procedures and services were coded according to the Current Procedural Terminology. Prior to providing the data set, the HIRA encrypted the original identification of each patient to protect patient privacy. The authors were also blinded to each patient's full personal identification number. The study obtained official approval from the HIRA inquiry commission in place of the authors' institutional review board. The HIRA inquiry commission deemed patient consent not necessary.

Patient population

The aforementioned medical information was obtained for all adult patients (≥18 years of age) diagnosed with IHD (Korean Standard Classification of Diseases codes I20-I25) between January 1, 2009 and December 31, 2012 (Figure 1A).
First, patients who initiated statin therapy at any point during the index period (January 1, 2010 to June 30, 2010) were identified as statin users. The date of the first statin prescription record was considered the index date for the statin users. We deliberately limited the statin users to patients with no recent history of statin use by excluding patients with a record of statin use in the year preceding their index date. Patients were also excluded from the study if they were diagnosed with T2DM prior to the index date, if they had a history of antidiabetic medication use before the index date, if statins were initiated prior to or after the index period, if the length of statin therapy was <12 weeks, or if they used nonstatin cholesterol-lowering agents. Patients who did not have a record of statin use in inpatient or ambulatory care claims at any time were considered nonstatin users. The index date of the nonstatin users was the date of the first prescription record with no statin use that appeared in the HIRA database during the same index period (January 1, 2010 to June 30, 2010). Patients were also excluded if they were diagnosed with T2DM prior to the index date or had a history of antidiabetic medication use before the index date.

Exposure to statins

In Korea, during the index period, the commercially available statin products were atorvastatin, rosuvastatin, simvastatin, pravastatin, lovastatin, fluvastatin, and pitavastatin. Patients who had used more than one statin agent, wherein therapy was switched from one statin to another, were classified into a complex group. The average daily dosage was calculated to determine the intensity of the therapy. Statin therapy intensities were assigned based on recommendations from the 2013 American College of Cardiology/American Heart Association guidelines [1].

Main outcome

The primary outcome was NODM during statin therapy in previously statin-naïve patients with IHD (Figure 1B). Follow-up started from the index date, and all patients were followed up until they developed NODM, were censored from the database owing to death, showed no record of medical claims for more than a year, or until December 31, 2012, whichever came first. NODM was defined as a record of a T2DM diagnosis plus a prescription of one or more antidiabetic agents. The risk of NODM was expressed as cases per persons (%, NODM cases divided by the number of statin users or nonstatin users), and the NODM incidence rate was expressed per 100 person-years, i.e., NODM cases divided by the total person-time for each group (Tables 1 and 2). The cumulative incidence of NODM for statin users versus nonstatin users during the follow-up period is described as the number of events per 10,000 people (Figure 2).

Covariates

Covariates included sex, age, comorbidities such as hyperlipidemia, hypertension, heart failure, peripheral artery disease, unstable angina, cerebrovascular disease, chronic kidney disease, and chronic liver disease, and history of myocardial infarction. These variables were considered possible confounders between statin use and risk of NODM.

Statistical analysis

Statin users and nonstatin users were matched by age and sex using proportionate stratified random sampling. A propensity score (PS) analysis was then carried out on the sampled cohorts with logistic regression on the demographic and preindex characteristics, including age and comorbidities, to address selection bias and the presence of potential confounding variables.
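The matching step can be illustrated with a short Python sketch: a logistic-regression propensity score followed by greedy 1:1 nearest-neighbour matching. This is a minimal illustration, not the authors' code, and the column names (statin_user, age, sex, and comorbidity flags) are hypothetical stand-ins for the HIRA variables.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def propensity_match(df, treat_col="statin_user",
                     covariates=("age", "sex", "hypertension", "hyperlipidemia")):
    """Greedy 1:1 nearest-neighbour propensity-score matching (no replacement)."""
    X, y = df[list(covariates)], df[treat_col]
    # Propensity score: probability of being a statin user given baseline covariates.
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    df = df.assign(ps=ps)
    treated = df[df[treat_col] == 1]
    control = df[df[treat_col] == 0]
    matches, used = [], set()
    for _, t in treated.iterrows():
        pool = control[~control.index.isin(used)]
        if pool.empty:
            break
        j = (pool["ps"] - t["ps"]).abs().idxmin()  # closest control on the PS scale
        used.add(j)
        matches.append((t.name, j))
    return matches  # list of (treated_index, control_index) pairs
```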
Demographic and clinical characteristics of statin users and nonstatin users were compared using the chi-square test for categorical variables and the t-test for continuous variables. Analyses were adjusted for age, sex, statin agents, intensities, and comorbidities. The cumulative rates of NODM during the follow-up period were estimated using Kaplan-Meier estimates of cumulative incidence (1 minus the Kaplan-Meier estimator) and plotted against time. The overall incidence rates and rates per 100 person-years were calculated, and unadjusted and adjusted hazard ratios (HRs) were presented with 95% confidence intervals (CIs). A Cox proportional hazards regression model was used to examine the association between the use of statin therapies and the occurrence of NODM. Subgroup analyses were also performed according to individual statin agents and intensities. All analyses were carried out with SAS statistical software (version 9.4 for Windows; SAS Institute, Inc., Cary, NC, USA).
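The survival analyses described above can be sketched in Python with the lifelines library (the study itself used SAS); this is a minimal sketch with hypothetical column names (time_days, nodm, statin), not the original program.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# df: one row per patient with hypothetical columns
#   time_days - follow-up from index date to NODM or censoring (days)
#   nodm      - 1 if NODM occurred, 0 if censored
#   statin    - 1 for statin users, 0 for nonstatin users (plus covariates)

def cumulative_incidence(df, group_col="statin"):
    """Kaplan-Meier cumulative incidence, 1 - S(t), per exposure group."""
    curves = {}
    for label, g in df.groupby(group_col):
        kmf = KaplanMeierFitter().fit(g["time_days"], g["nodm"])
        curves[label] = 1.0 - kmf.survival_function_["KM_estimate"]
    return curves

def incidence_per_100_person_years(df):
    """NODM cases divided by total person-time, per 100 person-years."""
    return 100.0 * df["nodm"].sum() / (df["time_days"].sum() / 365.25)

def adjusted_hazard_ratio(df, covariates=("statin", "age", "sex")):
    """Cox proportional hazards model; returns the adjusted HR for statin use."""
    cph = CoxPHFitter()
    cph.fit(df[["time_days", "nodm", *covariates]],
            duration_col="time_days", event_col="nodm")
    return cph.hazard_ratios_["statin"]
```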
Results

Of the 4,341,608 adult IHD patients, 156,360 patients (94,370 statin users and 61,990 nonstatin users) were included in our analysis. The baseline characteristics of statin users and nonstatin users are presented in Table 3. At baseline, the mean (standard deviation) age was 60.84 (11.63) and 60.96 (11.92) years for statin users and nonstatin users, respectively, and 40.2% of the study population were ≥65 years. Male patients represented 44.8% and 44.6% of statin users and nonstatin users, respectively. Statin users were more likely to have hyperlipidemia (40.2% vs 8.1%) and hypertension.

Among the statin agents, pravastatin had the lowest risk of NODM (adjusted HR 1.54, 95% CI 1.32-1.81), while those who were exposed to more than one type of statin, the so-called complex group, were at the highest risk of NODM (adjusted HR 2.18, 95% CI 1.89-2.51). The Kaplan-Meier estimates demonstrate that cumulative rates of NODM during the follow-up period were higher in statin users compared with nonstatin users, which correlated with the crude and adjusted HRs (P < 0.0001, Figure 2A). Sex-specific analysis showed that the adjusted NODM HR for statin users versus nonstatin users was more significant among male patients (adjusted HR 1.88, 95% CI 1.79-2.13) (Table 2); the incidence of NODM was most frequent among male statin users (Figure 2B). Age-specific analysis showed that the risk of NODM was most significant among statin users under the age of 40 years (adjusted HR 5.71, 95% CI 4.00-8.18) (Figure 2C and Table 2). In addition, there was a significant difference between the groups with respect to the time to NODM; the average time to NODM was 329.9 days in the statin users and 465.5 days in the nonstatin users. Baseline characteristics and preliminary results of the study subjects prior to PS matching can be found in the supplemental document (Tables S1 and S2).

Discussion

This study, undertaken using a large-scale database that contains the medical information of ~50 million Korean individuals, showed that statin therapy is associated with an increased risk of NODM. Subsequent meta-analyses confirmed the observed effect [20,21]. Rajpathak et al examined the effect of statins on T2DM risk by conducting a meta-analysis of six primary and secondary prevention trials, totaling 57,593 patients. Compared to the placebo group, those who received statin therapy had a 13% higher risk of T2DM (relative risk 1.13, 95% CI 1.03-1.23) [20]. Results of a meta-analysis of 13 randomized controlled trials of statins with 91,140 patients showed the same trend; statin therapy was associated with a 9% increased risk of incident diabetes (odds ratio [OR] 1.09, 95% CI 1.02-1.17) [21]. Our results followed a trend similar to that of previous studies [8,20,21]; however, the effect of statin-induced NODM appears to be much greater in our study population (adjusted HR 1.84, 95% CI 1.63-2.09).

The adverse effects of statins seem dose related [5,22,23]. In addition, the effect could be attributed to the intensity of statins, since the majority were treated with moderate- to high-intensity statins according to the American College of Cardiology/American Heart Association guideline recommendations. Our results demonstrated that moderate-intensity therapy is most likely to be associated with NODM risk [10,24]. In a study of 136,936 patients hospitalized for a recent cardiovascular event or procedure, it was revealed that high-potency statins were associated with a significantly higher NODM risk compared to lower-potency agents (rate ratio 1.15, 95% CI 1.05-1.26) [10]. The definition of high-potency statins described in that study correlates with our definition of moderate-intensity therapy. Thus, this result is particularly relevant to our findings, since the study design and the definition of the NODM endpoint are comparable to those of our study. The reason that high-intensity statins seem to have a lower risk of NODM than moderate-intensity statins is likely that a very small number of high-intensity statin users were included in the analysis, which may not be enough to represent all high-intensity statin users. Future studies are warranted to verify whether NODM risk is truly intensity dependent.

As shown in Figure 2C, among the statin users, patients aged ≤40 years are most likely to develop NODM, and the risk appears to be lower in older age groups. This observation is in line with the findings of a recent study evaluating the differential impact of statins on NODM in different age groups in Taiwanese women [25]. That study was limited to female patients aged ≥40 years; however, it is worth noting that statin-related NODM was more evident in the younger age groups. As mentioned in that study, the impact of age on statin-associated NODM is unclear and controversial. The finding seems counterintuitive, since it is reasonable to assume that older statin users are at higher risk of NODM, given that well-known T2DM risk factors including inactivity, hypertension, and hyperlipidemia are more common in this population. The results are inconclusive in this regard, and therefore further investigation is encouraged to explain the true impact of age on statin-associated NODM.

Recent data examined the risk of NODM among patients treated with different statin agents. With pravastatin as a reference drug, the risk of NODM was higher with simvastatin (adjusted HR 1.10, 95% CI 1.04-1.17), rosuvastatin (adjusted HR 1.18, 95% CI 1.10-1.26), and atorvastatin (adjusted HR 1.22, 95% CI 1.15-1.29), while other statin agents showed no increased risk of NODM [26]. In our study, on the other hand, the effects of different statin agents were compared using no statin therapy as a reference. All types of statin therapies were associated with an increased risk of NODM, and pravastatin was associated with the lowest risk of NODM (adjusted HR 1.54, 95% CI 1.32-1.81).
However, the associations between statin types and the degree of NODM risk were inconsistent with previous studies [25,27,28]. According to our findings, switching between different statins increases the risk of NODM. Additionally, lovastatin, simvastatin, atorvastatin, and rosuvastatin were associated with an over twofold increased risk of NODM, while the risks of other statin agents were somewhat lower. We understand that the study populations differ among the studies, which may contribute to these different findings. Patients' underlying conditions could potentially explain these controversies, and thus larger controlled trials are warranted to investigate the association.

As with many established pharmacologic treatments, interethnic variability in the response to statin therapy has been reported [17,29]. A pharmacokinetic study analyzing plasma exposure to statins revealed that Asian patients' statin plasma concentrations were nearly twofold higher than those of White subjects living in the same environment [17]. This suggests that Asian individuals may be more prone to adverse effects. Our observations are in line with the findings of a recent study analyzing the association between moderate-intensity statins and NODM among hospitalized patients in Korea (OR 1.99, 95% CI 1.00-3.98) [30]. Unlike the aforementioned studies, in which the majority of patients were non-Asians, our study population consisted of Asian patients. Not only are Asians more sensitive to statins, which predisposes them to a higher risk of NODM, but studies have also proposed that intensive therapy is no more effective in reducing major coronary events than moderate- to low-dose statin therapies [36,37]. Given these facts, it is postulated that Asian individuals may not require the guideline-recommended statin therapy to achieve clinical benefits. Low-intensity statins may be sufficient to adequately prevent IHD complications in Asian patients, while minimizing the risk of NODM.

Several possible mechanisms have been suggested to explain the diabetogenic effects of statins. One plausible hypothesis is that statins cause T2DM by altering glucose homeostasis through both impairment of insulin secretion and diminished insulin sensitivity [38]. Statins modify glucose metabolism by reducing glucose uptake into skeletal muscles and adipose tissues, secondary to downregulation of glucose transporters including the insulin-sensitive solute carrier family 2, member 4 (SLC2A4, formerly known as GLUT4) [39]. Decreased expression of SLC2A4 likely contributes to insulin resistance and consequently causes T2DM. Studies have shown that atorvastatin and simvastatin decrease the expression of SLC2A4 in adipocytes and reduce insulin sensitivity, which could potentially affect the onset of T2DM [39,40]. In addition, statin-induced mitochondrial dysfunction in pancreatic beta-cells, skeletal cells, and adipocytes may lead to an impairment of insulin sensitivity and adiponectin secretion [41,42]. In pancreatic beta-cells, mitochondria play a critical role in linking glucose metabolism with insulin exocytosis; thus, defects in mitochondrial function block this metabolic coupling and cause beta-cell death [41]. Functional defects in adipocytes are linked to the dysregulation of glucose homeostasis and to insulin insensitivity [42]. The last potential mechanism of statin-induced T2DM involves adiponectin, an insulin-sensitizing and anti-inflammatory cytokine released from adipocytes.
Insulin sensitivity was reduced as a result of decreased plasma concentrations of adiponectin with lipophilic statins, including rosuvastatin, atorvastatin, and simvastatin, whereas it increased in pravastatin-treated patients [43,44]. These results support our findings that NODM risk with rosuvastatin, atorvastatin, and simvastatin is more significant than that with pravastatin. It appears that statin-induced NODM is more than just an intensity-dependent effect; although it is not fully understood, the lipophilicity of statins seems to have an influence on statin-induced NODM as well. Our results showed that statin-induced NODM risk is apparent in real-world data, and the effect may be more significant in Asian individuals with IHD. Further studies, especially prospectively designed studies, are warranted to validate our findings.

Our study has some advantages, since the national health insurance claims database was used, which contains information about nearly all Koreans with IHD. Therefore, the results may be extrapolated to Asian individuals in general with similar clinical conditions. Nonetheless, there are limitations to the present study. First, owing to the retrospective nature of the study, we were unable to verify whether the newly diagnosed diabetic patients had major risk factors for T2DM, such as metabolic syndrome, impaired fasting glucose, increased body mass index, or elevated hemoglobin A1C. Additionally, the HIRA database does not contain information on other factors that could have influenced the analysis, such as height, weight, and family and social history, which would have permitted a more systematic evaluation of individuals. Second, the validity of the claims data is limited. We had to rely on diagnoses recorded by treating physicians using the sixth revision of the Korean Standard Classification of Diseases, and thus the accuracy of recording was not verified. We overcame this limitation by strictly limiting NODM to those who had received a T2DM diagnosis on two separate occasions and who were prescribed antidiabetic medications. This ensured that the statins' adverse effect was not overestimated. Lastly, because data were provided by a third-party service, our data access was limited to the records of the sampled cohort at the time of PS matching. Due to this limitation, PS matching was performed on the sampled cohort. We might have obtained a more balanced cohort if PS matching had been performed on the initial patient population.

Based on the current evidence, it is concluded that all statin therapies are associated with an increased risk of NODM in patients with IHD. Our finding is consistent with results from recent studies [10,21,24,45], although the adverse effect appears to be greater in our population. It is believed that our study contributes to a better understanding of the association between statins and NODM through the analysis of real-world statin users. Especially in those aged ≤40 years, the risk of statin-associated NODM seems to be greater. Before initiating statin therapy in these patients, lifestyle modification should be emphasized, and statins' potential benefits versus adverse effects need to be discussed. In addition, periodic screening and monitoring for T2DM may be warranted in all patients with IHD undergoing statin therapy.
TweetBoost: Influence of Social Media on NFT Valuation

NFT or Non-Fungible Token is a token that certifies a digital asset to be unique. A wide range of assets, including digital art, music, tweets, and memes, are being sold as NFTs. NFT-related content has been widely shared on social media sites such as Twitter. We aim to understand the dominant factors that influence NFT asset valuation. Towards this objective, we create a first-of-its-kind dataset linking Twitter and OpenSea (the largest NFT marketplace) to capture social media profiles and linked NFT assets. Our dataset contains 245,159 tweets posted by 17,155 unique users, directly linking 62,997 NFT assets on OpenSea worth 19 Million USD. We have made the dataset public (https://tinyurl.com/NFTValuation). We analyze the growth of NFTs, characterize the Twitter users promoting NFT assets, and gauge the impact of Twitter features on the virality of an NFT. Further, we investigate the effectiveness of different social media and NFT platform features by experimenting with multiple machine learning and deep learning models to predict an asset's value. Our results show that social media features improve the accuracy by 6% over baseline models that use only NFT platform features. Among the social media features, the count of user membership lists and the numbers of likes and retweets are important.

INTRODUCTION

Blockchain has emerged as a core disruptive technology that has transformed the financial ecosystem. The origin of Blockchain can be traced back to a 2008 whitepaper [12] published under the pseudonym Satoshi Nakamoto, which introduced blockchain in the context of the most popular cryptocurrency, Bitcoin. Bitcoin uses Blockchain technology to build the publicly distributed ledger used to record the transactions on its network. The growing interest in Blockchain technology, especially its use in the financial domain by both retail and institutional investors, has led to several new products emerging in the crypto-sphere in the search for the 'next big thing'. One such emerging Blockchain product that has captured large public attention is Non-Fungible Tokens, or NFTs.

An NFT is a token that certifies a digital asset to be unique. NFTs use blockchain to store anything that can be converted into digital files, for example, images, music, and videos. Blockchain technology enables the association of proof of ownership with the digital asset. NFTs have grown exponentially in 2021; this phenomenal growth has led to traditional auction houses becoming receptive to digital art NFTs. On March 11, 2021, 'Everydays: The First 5000 Days', an NFT artwork by the prominent digital artist Beeple, sold at Christie's for over $69 million. Jack Dorsey (CEO of Twitter) raised an astounding $2.9 million for charity by auctioning his first tweet as an NFT. All these sales demonstrate the adoption of NFTs by the mainstream community. The highly volatile NFT prices and sudden popularity led many people to create NFTs and, in turn, promote and sell them for a profit. The most significant impact of NFTs has been on how they transformed the art world [10,17]. NFTs have allowed artists to sell their art outside the gate-keeping systems and taste-hierarchies [10].

An NFT is a token representing a digital asset stored on the blockchain with proof of ownership. Smart contracts [2] allow transferring and retrieving information about an NFT through function calls. Some function calls, such as 'transfer', are restricted to the owner of the NFT. Since the underlying blockchain is decentralized, no one can change the state of the contract or the NFT without making these function calls, ensuring high levels of security.
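The function-call interface can be made concrete with a small read-only example against an ERC-721 (NFT) contract using web3.py. This is a minimal sketch, not part of the paper: the RPC endpoint and contract address are placeholders, and only the fragment of the standard ERC-721 ABI needed for the two calls is included.

```python
from web3 import Web3

# Placeholder RPC endpoint; replace with a real node URL.
w3 = Web3(Web3.HTTPProvider("https://mainnet.example-rpc.io"))

# Fragment of the standard ERC-721 ABI: read-only ownership/metadata calls.
ERC721_ABI = [
    {"name": "ownerOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "address"}]},
    {"name": "tokenURI", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "string"}]},
]

# Hypothetical (zero) address for illustration; use a real, checksummed
# contract address in practice.
CONTRACT_ADDRESS = Web3.to_checksum_address("0x" + "00" * 20)
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ERC721_ABI)

token_id = 1
# Anyone can read contract state; state-changing calls such as
# transferFrom require a transaction signed by the owner.
print(contract.functions.ownerOf(token_id).call())
print(contract.functions.tokenURI(token_id).call())
```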
Most NFTs are digital, meaning that consumers do not receive any physical items when they purchase them. In most circumstances, the NFT is only a proof of ownership, not of copyright; i.e., the owner does not have exclusive access to the content of the NFT. For example, 'disaster girl', a popular Internet meme, was sold as an NFT for $495,000, even though the exact image in the NFT is freely available and distributed throughout the Internet. The value of the NFT came from the fact that it was sold by the girl featured in the meme. The same meme sold as an NFT by other people on the marketplace did not receive any traction. Interestingly, most NFTs sold online can be downloaded and shared publicly for free. The value of an NFT is based on the perception of buyers, which arises from the recognition of the creator and the overall marketing around the NFT itself.

Further, unlike company stocks or cryptocurrency exchanges, which are traded at regular intervals, NFTs have sporadic sales. The transaction history of an NFT spans varied durations, and the owner changes in each sale. A single sale cannot account for the overall value of an NFT, as the next buyer may be willing to pay much more or less than the previous sale amount depending on their current perception of the NFT. Hence, we use the average of all sales to assign a value to the NFT. The tremendous volatility of cryptocurrencies, the lack of any tangible asset and the speculative marketspace make the asset valuation task for NFTs extremely challenging. Unlike traditional assets, NFT asset valuation cannot be modelled directly as a mathematical economic system, but rather as a social phenomenon involving marketing schemes and the recognition and popularity of the NFT.

There are various marketplaces to buy and sell different categories of NFTs, such as OpenSea (https://opensea.io), Rarible (https://rarible.com), SuperRare (https://superrare.com), NiftyGateway (https://niftygateway.com) and Foundation (https://foundation.app). OpenSea is the largest and most popular of all such marketplaces, with over 300,000 users and $3.4 Billion volume traded in August 2021 alone. Hence, we chose OpenSea to understand and model the NFT market.

To explore the branding and context around the NFT, we look at social media, particularly Twitter, as a vehicle for building public perception and attracting potential buyers. Fig. 1(a) shows a popular tweet and the corresponding NFT asset sold for over $150,000 on OpenSea. Several instances of well-known personalities like Mark Cuban, Jack Dorsey, etc. selling high-priced NFTs on Twitter indicate that social media reach can play a role in influencing asset value. More than 70% of the total traffic from social media on OpenSea is from Twitter (https://www.similarweb.com/website/opensea.io/#social). Thus, we focus on Twitter to understand the influence of social media on the NFT market. We also aim to assess if an asset is overvalued or undervalued based on our asset valuation framework.

In this paper, we aim to answer the following research questions:
• RQ1. What is the relationship between user activity on Twitter and price on OpenSea?
• RQ2. Can we predict NFT value using signals obtained from Twitter and OpenSea, and identify which features have the greatest impact on prediction?

We collect tweets announcing NFTs and follow the linked OpenSea URLs to extract NFT sales information and metadata from OpenSea.
Fig. 1 shows an example of a tweet announcing an NFT and its corresponding OpenSea listing page. In addition, we crawl NFT images to analyse whether the image in itself exerts an influence on the price. Analysing growth and price trends on OpenSea against popularity metrics from Twitter reveals the possible significance of social media features on NFT value. We motivate average selling price as the metric to assign value to an NFT, due to the limited sales and highly volatile prices of NFTs. We further develop and analyse predictive models using Twitter, OpenSea and NFT image features to assess their impact on asset value.

Overall, we make the following contributions in this paper:
• To the best of our knowledge, we create the first-ever dataset linking NFTs from OpenSea with their corresponding tweets. We have made the dataset public in adherence to the FAIR (Findable, Accessible, Interoperable, Reusable) principles.
• We build ordinal classification models to predict NFT asset value using features from both OpenSea and Twitter, as well as the NFT image itself. Our best model, comprising an ensemble of Twitter and OpenSea features, achieves an accuracy of 69.5% in a 5-class classification setup.
• We show that both Twitter and OpenSea features influence the model output. In contrast, the predictive power of image features is limited. This shows how branding and metadata (Twitter and OpenSea features) have a stronger effect on the value than the NFT product itself (image features).

The remainder of the paper is organized as follows. In Section 2, we discuss related work on NFTs and asset valuation in the financial domain. We motivate and present our asset valuation problem setup in Section 3. Section 4 briefly introduces the OpenSea platform and its features, as well as blockchain-specific terms used in our analysis. We discuss our data collection pipeline and analyse the impact of Twitter on the OpenSea marketplace from the temporal dimension and value perspective in Section 5. In Section 6, we discuss multimodal models for NFT asset valuation. We present results using various models and feature subsets in Section 7. Finally, we conclude with limitations and future work in Section 8.

RELATED WORK

Since NFTs are a recent phenomenon, there is limited research on NFT-related data. Most of the previous work on NFTs analyses the relationship between blockchain, cryptocurrencies and NFTs. Recent work has analyzed the protocols, standards, desired properties, security, and challenges associated with NFTs [6,16]. The sudden rise of NFTs sparked an interest in finding correlations with more traditional crypto-assets. There has been a focus on understanding the correlation of the entire NFT market with Ethereum and Bitcoin [1], and also of specific collections like Axie, CryptoPunks, and Decentraland [9]. These sub-markets have millions of dollars traded every day [9], which has led to critical analysis of these individual markets. Other studies focus on fairness in NFT submarkets like CryptoKitties [15]. To the best of our knowledge, there is limited work on NFT asset valuation, which is the focus of this work. Recently, Nadini et al. [11] used simple machine learning algorithms to develop a predictive model using sale history and visual features, but ignored social media features.
Social media features have helped predict the prices of assets in traditional markets [4,14], for example stock prices [13]. Hence, in this work, we focus on understanding the impact of OpenSea and social media features on this task.

THE NFT ASSET VALUATION PROBLEM

NFT markets are highly illiquid in nature, which means sale prices are very volatile and irregular. Similar to physical art collections, the number of transactions per NFT is low, as the buyers and sellers are a small niche of collectors. A traditional price prediction setting is not feasible or robust for each NFT, as we do not have consistent periodic sales. In our dataset as well, a very small percentage (0.9%) of the total assets had more than 5 sales. Price prediction models used for stocks or cryptocurrencies rely on a large number of historical data points gathered at regular intervals, which are not available for an illiquid market like NFTs.

We propose an asset valuation task instead of a price prediction task to overcome the challenge of an illiquid market. We define asset value as the average selling price of an asset over all its historic sales. This compensates for the large volatility in selling price and provides a better indicator of asset valuation. Our objective is to provide a quantitative assessment of the NFT's value and to identify whether the Twitter reach of the NFT influences the value. We thus divide the NFTs into multiple asset classes (Very Low Value Asset, Low Value Asset, Medium Value Asset, High Value Asset, Very High Value Asset) based on the average sale price. For the sake of brevity, we will refer to these classes as Class1, Class2, Class3, Class4 and Class5, respectively. The maximum average price was found to be orders of magnitude more than the minimum; hence the asset classes were binned into logarithmic divisions, as sketched below. Henceforth, we use the term 'asset value' to refer to the average selling price of the NFT.
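As a small illustration of this log-scale binning, the sketch below assigns per-asset average sale prices to the five ordinal classes. The cut points are illustrative equal-width divisions in log space, not the exact thresholds used by the authors.

```python
import numpy as np
import pandas as pd

LABELS = ["Class1", "Class2", "Class3", "Class4", "Class5"]

def assign_asset_class(avg_prices: pd.Series, n_classes: int = 5) -> pd.Series:
    """Bin average selling prices into ordinal classes on a log scale."""
    logp = np.log10(avg_prices.clip(lower=1e-6))   # guard against zero prices
    edges = np.linspace(logp.min(), logp.max(), n_classes + 1)
    edges[0] -= 1e-9                               # make the lowest bin inclusive
    return pd.cut(logp, bins=edges, labels=LABELS[:n_classes])

# Usage: asset value = mean over all historic sales of each asset.
# sales = pd.DataFrame({"asset_id": [...], "price_usd": [...]})
# avg = sales.groupby("asset_id")["price_usd"].mean()
# classes = assign_asset_class(avg)
```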
PRIMER ON OPENSEA PLATFORM

OpenSea is the first and largest peer-to-peer marketplace for NFTs. It has attracted traders to trade assets, creators to launch their portfolios, and developers to build integrated marketplaces for their applications. The primary product on OpenSea is called the "asset", which is a unique digital item stored as an NFT on the blockchain. The transactions and ownership of the asset are programmed in a smart contract, which stores the link to the image, music or video in its metadata. Each asset is uniquely identified by its parent contract's address and a unique token id, and needs to be listed on OpenSea by the creator to be available for sale. OpenSea permits the following transactions on an asset: (1) listing the NFT at a fixed price, (2) listing the NFT in a first-price auction or a Dutch auction, or (3) direct offers from buyers. Once an asset is sold to a buyer, the buyer can resell it; thus, the asset keeps changing owner and price over time. Assets on OpenSea can be further grouped into collections. Collections are groups of homogeneous assets sharing common traits and properties. For example, the OpenSea page for one of the most popular collections, CryptoPunks (Fig. 2), contains similar assets with small variations. Most transactions on OpenSea are done on the Ethereum blockchain. Since every transaction on Ethereum requires a transaction fee, called a gas fee, accepting offers and buying assets on OpenSea are also associated with a gas fee in addition to the asset price.

DATA COLLECTION AND ANALYSIS

5.1 Data Collection

We collect data from two platforms: Twitter and the OpenSea marketplace. We use the unique URL of the OpenSea asset to link these two data sources. Our data collection pipeline is illustrated in Fig. 3.

Twitter Data. First, we curate 245,159 tweets from Jan 1, 2021, to Mar 30, 2021, that contain an opensea.io NFT asset link. A total of 17,155 unique users posted these tweets. Multiple tweets can reference the same OpenSea link. Our dataset contains 62,997 unique OpenSea assets belonging to 16,001 unique collections. We also collect additional information about the tweet (number of likes, number of retweets, tweet timestamp) and the source of the tweet (number of followers, number of followings, bio, account creation date). The additional meta-information allows us to model and understand the users posting about NFTs on Twitter.

OpenSea Data. Apart from the social media information extracted from Twitter, the OpenSea platform also provides us with many valuable features about the asset. We keep only those assets created between Jan 1, 2021, and Mar 30, 2021 (the same as our tweet collection period). We used the OpenSea API (https://docs.opensea.io) to extract this additional information about the asset and its collection. Besides these features, we also crawl the associated NFT image. If the asset is a video or a GIF, we only extract and use the first frame. Overall, the dataset contains 62,997 images corresponding to unique assets.

FAIR Dataset Principles. The gathered data consists of publicly available information about a social network; collecting and examining it provides significant insights into the platform's characteristics. Our dataset also conforms to the FAIR principles. In particular, the dataset is "findable", as it is shared publicly. The dataset is also "accessible", given that the format used (CSV) is popular for data transfer and storage. This file format also makes the data "interoperable", given that most programming languages and software have libraries to process CSV files. Finally, the dataset is "reusable", as the included README file explains the data files in detail. The data was collected through public API endpoints of OpenSea and Twitter, adhering to their privacy policies. The data we collected was stored on a central server with restricted access and firewall protection.

Twitter and OpenSea Interaction Analysis

We first study the interaction between user activity on the OpenSea and Twitter platforms and its influence on the asset value. We perform a temporal analysis of our dataset across both platforms. We also perform a basic correlation analysis of signals like the average number of followers against asset price.

Correlation between NFT popularity across platforms. We measure the NFT popularity on Twitter by aggregating features of all tweets that mention the asset. We have 245,159 tweets in our dataset, with the majority (89%) of them posted in March 2021. We plot the daily number of tweets and the NFT asset creation dates in Fig. 4. The Spearman's correlation coefficient (ρ) for the two time series is 0.85 (p-value < 0.001), showcasing a strong positive correlation. More than half (54.6%) of the tweets are posted less than a day after the NFT creation.
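The correlation test itself is straightforward; a minimal sketch with scipy, assuming two hypothetical DataFrames of tweet and asset-creation timestamps (datetime64 dtype):

```python
import pandas as pd
from scipy.stats import spearmanr

def daily_series_correlation(tweets: pd.DataFrame, assets: pd.DataFrame):
    """Spearman rank correlation between daily tweet counts and daily
    NFT-creation counts; both frames have a datetime 'timestamp' column."""
    t = tweets.set_index("timestamp").resample("D").size()
    a = assets.set_index("timestamp").resample("D").size()
    t, a = t.align(a, fill_value=0)        # put both series on the same daily index
    rho, p_value = spearmanr(t.values, a.values)
    return rho, p_value
```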
We plot the histogram of the delay between asset creation on OpenSea and its promotion on Twitter in Fig. 5. The distribution of the time delay approximately follows a log-normal distribution. Such a distribution has been observed in other studies on online information spread [8] and inter-activity times [3]. The inter-activity time is the duration between two consecutive tasks, like the addition of followers on social media or the sending of emails [3]. We also analyzed the impact the delay can have on the asset value, but we found no significant correlation (ρ < 0.05).

Twitter Username Analysis. Next, we characterize the users on Twitter who post about these NFTs by analyzing their usernames; a sketch of this analysis follows below. A Twitter username is a strong indicator of affinity towards a cause or an organization. We computed character n-grams from usernames and manually inspected the most frequent ones. The most frequent relevant 3-gram turned out to be 'nft', which was present in 7.6% of the usernames. The other significant n-grams that came up were 'crypto', 'collect', and 'design'. Further, we partition users into two buckets, NFT-affiliated (having 'nft' in their username) and non-NFT-affiliated, and check their account creation dates. Fig. 6 shows that a large proportion of NFT-affiliated accounts were created in the first quarter of 2021, indicating that they were specifically created to promote and push NFT-related content. Also, over 60% of all NFT-affiliated accounts were created in March 2021, compared to only 18% of non-NFT-affiliated accounts. We also found that the mean value of assets promoted by non-NFT-affiliated usernames was marginally higher (≈25%) than of those promoted by NFT-affiliated usernames. We attribute this to the large number of low-quality accounts created specifically to push NFT content.
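The username analysis can be reproduced with a few lines of Python; a minimal sketch (the 7.6% and other figures above come from the paper's data, not from this code):

```python
from collections import Counter

def char_ngrams(name: str, n: int = 3):
    """Lower-cased character n-grams of a username."""
    name = name.lower()
    return [name[i:i + n] for i in range(len(name) - n + 1)]

def top_ngrams(usernames, n=3, k=20):
    """Most frequent character n-grams across all usernames."""
    counts = Counter(g for u in usernames for g in char_ngrams(u, n))
    return counts.most_common(k)

def split_by_affiliation(usernames):
    """Bucket users into NFT-affiliated ('nft' in username) vs the rest."""
    nft = [u for u in usernames if "nft" in u.lower()]
    other = [u for u in usernames if "nft" not in u.lower()]
    return nft, other
```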
Asset Value Analysis. To understand the relationship between the asset value and the popularity of the user, we plot the average number of followers (our proxy for user popularity) against the asset value of the NFT in Fig. 7. Since many users can promote an NFT, we took the average follower count of the users who tweeted about each NFT. Both asset value and number of followers are highly skewed; hence we create a log-log plot. Note that we filtered out the points with asset value or follower count less than one. We observe a weak positive correlation, which suggests that an increase in the number of followers leads to a greater asset value. The Spearman's coefficient between the follower count and asset value on the log scale is 0.20, which indicates a weak positive correlation. The correlation, albeit weak, leads us to believe that social media features can help in asset value prediction. The following section discusses how we leverage features across the Twitter and OpenSea platforms for accurate asset value prediction.

FEATURES AND MODELS FOR ASSET VALUATION

We train multiple machine learning and deep learning models using a mix of Twitter, OpenSea and image features to predict the value of an NFT asset.

Features

We use 77 Twitter features and 19 OpenSea features. The salient features from all three aspects are captured in Table 1. In the following, we discuss these features in detail.

Twitter Features. Each NFT can be mentioned in multiple tweets, so we aggregate tweet-level properties such as likes, replies, and retweets. Similarly, each NFT could be mentioned by multiple users; hence user-account-level Twitter attributes like follower counts, listed count and favorites were also aggregated across users. We used multiple aggregation functions (mean, max, min) to create a diverse feature set.

OpenSea Features. We retrieve features about the NFT from the OpenSea platform. We use features like asset creation date, bid withdrawn and bid entered. Additionally, we engineer and gather several asset features like number of sales, number of bids and number of offers, based on events like auctions, sales, offers, bids and transfers, using the events endpoint of the OpenSea API. 'Offers' and 'Bids' both indicate an intent to purchase. Interested buyers can make offers to buy an asset at their desired amount; if the offer is accepted by the seller, the asset is transferred directly to the buyer's digital wallet. Buyers can make bids while an asset is on auction, and the asset is transferred to the highest bidder on the expiration date.

Problem Settings: Binary vs Ordinal

We model the NFT asset valuation problem in two ways: as a binary and as an ordinal multiclass classification problem.

Binary Classification. Our first objective is to gauge whether selling an NFT can be a profitable venture. When an NFT remains unsold or is valued at a nominal amount, users are not able to cover the mandatory gas fees. In our dataset, 78% of assets were unsold or sold for less than $10. They form the loss-bearing NFT class, while the rest of the assets, sold for more than $10, form the profitable NFT class.

Ordinal Classification. We observe that our classes denote asset value intervals with an intrinsic order: Class 1 ≺ Class 2 ≺ Class 3 ≺ Class 4 ≺ Class 5. Since the classes are not independent, treating NFT asset valuation as a nominal multiclass classification problem is not entirely accurate. We hence model the problem as an ordinal classification problem. Formally, for K-class ordinal classification we create K − 1 binary classifiers, where the k-th classifier predicts the probability P(X ≻ Class k) that asset X belongs to a class above Class k. We then use these classifiers to compute the probability of an asset belonging to each class. In our case of 5-class ordinal classification, we train 4 binary classifiers and compute the following probabilities:

P(Class 1) = 1 − P(X ≻ Class 1)
P(Class k) = P(X ≻ Class k−1) − P(X ≻ Class k), for k = 2, 3, 4
P(Class 5) = P(X ≻ Class 4)

We assign the asset to the class with the highest probability score.
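One compact way to realize this scheme is to wrap the K − 1 binary models behind a single interface. The sketch below is illustrative rather than the authors' implementation; it uses XGBoost (one of the models the paper experiments with) as the base binary learner and assumes integer labels 0-4 produced by the log-scale binning.

```python
import numpy as np
from xgboost import XGBClassifier

class OrdinalClassifier:
    """Ordinal classifier built from K-1 binary 'greater than class k' models,
    combined into per-class probabilities as in the equations above."""

    def __init__(self, n_classes: int = 5):
        self.n_classes = n_classes
        self.models = [XGBClassifier(importance_type="gain")
                       for _ in range(n_classes - 1)]

    def fit(self, X, y):
        # y holds ordinal labels 0..K-1; model k learns P(y > k).
        for k, m in enumerate(self.models):
            m.fit(X, (y > k).astype(int))
        return self

    def predict_proba(self, X):
        # gt[:, k] = P(y > k) for each asset.
        gt = np.column_stack([m.predict_proba(X)[:, 1] for m in self.models])
        probs = np.empty((X.shape[0], self.n_classes))
        probs[:, 0] = 1.0 - gt[:, 0]                 # P(Class 1)
        for k in range(1, self.n_classes - 1):
            probs[:, k] = gt[:, k - 1] - gt[:, k]    # P(Class k+1)
        probs[:, -1] = gt[:, -1]                     # P(Class K)
        return probs

    def predict(self, X):
        # Assign each asset to the class with the highest probability score.
        return self.predict_proba(X).argmax(axis=1)
```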
We have established that ordinal classification has an intrinsic order between the classes. Due to this, standard error measures such as accuracy, precision, recall, F1-score and mean squared error either ignore information (the ordering of classes) or assume additional information (an absolute distance between classes). We thus compute additional metrics to gauge our model performance thoroughly. Cardoso et al. [5] introduced a novel ordinal classification index for measuring the performance of ordinal classification, and we use it to evaluate our models. The proposed coefficient conveys how far the outcome deviates from the ideal prediction and how inconsistent the classifier is with respect to the relative order of the classes. The range of values for the ordinal classification index is [0, 1], and smaller values indicate better ordinal classification.

Models

We experiment with multiple machine learning as well as deep learning models.

6.3.1 Traditional machine learning models. For both problem setups (binary as well as ordinal), we experiment with several machine learning models like logistic regression, SVM, random forests, LightGBM and XGBoost, using Twitter and OpenSea features.

CNNs for Image-based Predictions. Besides Twitter and OpenSea features, we also attempt to capture the influence of the NFT image in determining the asset value of the NFT on OpenSea. Here, we predict asset value using Convolutional Neural Network (CNN)-based classifiers. We experiment with two CNN architectures: ResNet-101 and DenseNet-121. We feed the representative image for the NFT asset as input. We train 4 binary classifiers and apply the Softmax function on the final layer to obtain class probabilities, as discussed in Section 6.2.2. For each model, we use the Adam optimizer with a learning rate of 0.001. The images are augmented using random affine transformations and random horizontal flips to avoid model overfitting. The image pixel values are normalized and scaled to values between -1 and 1, and are grouped into batches of size 128. We use the cross-entropy loss.
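A training sketch matching this description, using PyTorch and torchvision, is shown below. It is a minimal illustration under the stated hyperparameters, not the authors' code; the DataLoader construction, dataset, and input resolution are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Preprocessing mirroring the text: affine and horizontal-flip augmentation,
# then scaling pixel values to [-1, 1].
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),          # assumed input resolution
    transforms.RandomAffine(degrees=10),    # assumed augmentation strength
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

def make_binary_cnn():
    """DenseNet-121 backbone with a 2-way head: one of the K-1 binary
    'greater than class k' models described in Section 6.2.2."""
    model = models.densenet121(weights="DEFAULT")
    model.classifier = nn.Linear(model.classifier.in_features, 2)
    return model

def train(model, loader, epochs=10, device="cpu"):
    """Minimal loop; `loader` yields (image, binary label) batches of size 128."""
    model = model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=0.001)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
```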
ASSET VALUATION RESULTS

We tried multiple machine learning models like SVM, logistic regression, and decision-tree-based ensemble models like random forests and XGBoost (eXtreme Gradient Boosting). We use a 75% training and 25% test split for all the models. We performed several experiments by varying the classifier, the problem setup (binary vs ordinal) and the feature sets. Across all such combinations, the XGBoost classifier performed the best. For brevity, we focus on results from XGBoost in this section. We use the average gain metric to compute XGBoost feature importance: it measures gains across all splits where the feature was used in order to assign an importance score. All the results in the following sections use the XGBoost classifier and the average gain as the feature importance metric.

Accuracy with Different Feature Sets

Table 2 shows the binary/ordinal classification accuracy and the ordinal classification index using various feature set combinations.

Tweet Features: Using just tweet features, we obtained an accuracy of 83.75% for the binary classification and 67.03% for the ordinal classification problem using XGBoost. The ordinal classification index for the model is 0.36. We observe that features like listed count, number of likes and replies, and the presence of 'nft' in the Twitter username are the most important.

OpenSea Features: The classifier provided an accuracy of 63.22% for the ordinal classification problem, with an ordinal index score of 0.39. Since the asset-level features include signals like number of sales, using these features for binary classification leads to almost perfect accuracy values. Thus, we report only ordinal classification accuracies when using OpenSea asset features in Table 2. It is interesting to note that even Twitter features alone could predict the asset value better than the inherent asset features. This indicates that social media features highly influence the value of an NFT.

Twitter + OpenSea Features: Finally, we combine Twitter features with OpenSea features. The ensemble model of Twitter + OpenSea features performs better than either of them individually, with a final accuracy of 69.33% for the ordinal classification task, while the OpenSea features alone show an accuracy of 63.22%. There is a significant improvement of over 6 absolute percentage points in accuracy and ~0.05 points (from 0.39 to 0.34) in the ordinal classification index when compared to using only OpenSea features. We computed feature importances and found that both the Twitter and the OpenSea features appear in the top feature importance list, as shown in Fig. 8. We also observe that Twitter features like listed count, maximum listed count, and maximum numbers of likes and replies have high importance scores. On the other hand, OpenSea features like offer entered, bids withdrawn, bid entered and is presale are important as well.

Accuracy using Different Classifiers

The scores of the different models for the best-performing Twitter + OpenSea ensemble are listed in Table 3. We observe that XGBoost leads to much better ordinal accuracy and the lowest ordinal index amongst all classifiers. (Table 3: Ordinal accuracy and index for the final Twitter + OpenSea ensemble. XGBoost outperforms the other models; a similar trend was observed with other feature sets. Lower ordinal index and higher ordinal accuracy are better.)

Accuracy with CNNs

We obtain accuracies of 54.62% and 52.94% with pretrained DenseNet-121 and ResNet-101 models, respectively, for the multiclass ordinal classification task, which is lower than that of the Twitter model (67.03%) as well as the OpenSea model (63.22%). Even in terms of the ordinal classification index, the image-based models report values of 0.46 and 0.43 for the ResNet-101 and DenseNet-121 architectures, respectively, which is significantly worse than the models based on Twitter features (0.36), OpenSea features (0.39) and even their ensemble (0.34). The ResNet-101 model has an accuracy of 79.01% and the DenseNet-121 model has an accuracy of 79.44% in the binary classification setting. Regardless of the architecture used, the models using image information display lower accuracies and F1-scores and higher ordinal classification index values than those using Twitter and OpenSea features and their ensembles described in Table 2. The weaker results obtained using the image features independently reinforce our initial hypothesis that the branding and context surrounding an NFT influence the asset value more than the content itself.

CONCLUSION

We have seen in the past how the 'digital' has triumphed over the 'physical', with online advertisements, e-commerce and streaming services becoming more prevalent than their physical counterparts. Similarly, NFTs have the potential to challenge the collectible and art market, worth over $350 Billion. In this paper, we track the growth of NFTs and show how social media reach can impact their value. We build models to predict asset value and find that the Twitter features listed count and username characteristics significantly affect the value of an asset. We lay out the first work to characterise and value NFT assets using social media features. Our proposed system can be used to build a profitable trading strategy by identifying overvalued and undervalued assets.

The NFT marketplace is in its nascent stage, with limited data compared to well-established financial markets for instruments like stocks, bonds, and options. Hence, the utilisation of large, data-hungry deep learning models is not viable. NFTs are a fast-evolving industry with numerous challenges appearing daily. For example, on 28th October, an NFT asset, CryptoPunk 9988, was sold for over $532 Million USD. However, the buyer and seller of the asset were the same person, and the goal was to artificially inflate prices and go viral on social media. Our current system would not handle such outlier transactions and schemes. A better understanding of the security issues surrounding NFTs [7] will enable us to create more robust systems. In future, we plan to build systems to automatically
Iron treatment and the TREAT trial

Treatment with erythropoiesis-stimulating agents (ESAs) enables the correction of anaemia in chronic kidney disease (CKD) patients, thus reducing its symptoms and complications. Not only is iron therapy aimed at correcting iron deficiency, but it is also an adjuvant therapy in CKD patients receiving ESAs. Iron stores in CKD patients may be near normal, but there may be insufficient immediately available iron to optimize ESA therapy. In this context, iron therapy significantly reduces ESA dose requirements. Erythropoiesis following ESA therapy may precipitate iron deficiency, in association with increased platelet production. In the TREAT trial, the 'placebo group' did not receive a true 'placebo', since 46% of the patients had at least one dose of ESA, and the group achieved progressively increasing haemoglobin (Hb) values during follow-up, contrary to common observation. The patients in the 'placebo' group were treated more frequently with intravenous iron than the darbepoetin group. Given that many patients were relatively iron deficient at baseline, iron administration was successful in many of them in obtaining and maintaining partial anaemia correction without the need for ESAs, thus underlining the great importance of iron supplementation in correcting anaemia. The upper safety limit for iron administered to patients in order to minimize, as much as possible, the ESA dose, and the upper limit for the ESA dosage needed to maintain the target Hb range suggested by the current guidelines, are still open questions.

Anaemia develops early in the course of chronic kidney disease (CKD) and affects a large percentage of CKD patients; treatment with erythropoiesis-stimulating agents (ESAs) enables the correction of anaemia, thus reducing its symptoms and complications. ESA therapy should be given to treat anaemia in all CKD patients with a haemoglobin (Hb) level persistently below 11 g/dL, from patients in the early stages of CKD to those receiving renal replacement therapy [1,2], after having ruled out all other causes of anaemia. Dose requirements for achieving anaemia correction are quite variable and poorly predictable in the individual patient. However, a number of patients need a greater-than-usual ESA dose and are defined as hyporesponsive. According to the European Best Practice Guidelines (EBPG) [1], resistance to ESA treatment is defined as a continued need for >20,000 IU/week (300 IU/kg/week) of rHuEPO administered subcutaneously or 1.5 μg/kg of darbepoetin alfa (>100 μg/week); this means that resistant patients require more than 2.5 times the average ESA dose. The most common cause of incomplete response to ESAs is absolute or functional iron deficiency. According to an Italian cross-sectional study [3], 16% of patients had a transferrin saturation of <15%, which is considerably below that recommended in the EBPG and in the National Kidney Foundation-Kidney Disease Outcomes Quality Initiative (KDOQI) guidelines [1,2]. Angiotensin II-converting enzyme inhibitors and angiotensin II receptor blockers, often used in CKD patients for controlling hypertension and possibly slowing down the progression of CKD, may also play a role. However, compliance should always be checked in patients self-administering an ESA.

Iron status

Iron status should be checked every 1-3 months according to clinical needs [1,2].
This information should be evaluated together with Hb levels, ESA doses and their trend over time, in order to elucidate the status of both the external iron balance (gains or losses) and the internal iron balance (distribution of iron between stores and erythrocytes) [2]. The most widely used iron tests are serum ferritin and transferrin saturation (TSAT) levels. However, these are not optimal tests, as they lack accuracy and stability. Indeed, they are greatly influenced by inflammation and malnutrition, two conditions often affecting CKD patients. An ideal marker of functional iron deficiency should be independent of erythropoietic activity. New cell counters are able to determine cell volume and Hb concentration separately on reticulocytes and mature erythrocytes. Evidence for iron targets in CKD patients not on dialysis is poor; a target of 100-500 ng/mL for serum ferritin levels [2] should be adequate to ensure effective erythropoiesis with ESA treatment.

Iron administration

The preferred route of iron administration in haemodialysis patients is intravenous (IV); in PD and CKD patients not on dialysis, it can be either IV or oral [1,2]. Oral iron is best absorbed when given without food; constipation, diarrhoea, nausea or abdominal pain limits compliance. In CKD patients, not only is iron therapy aimed at correcting iron deficiency, but it is also an adjuvant therapy in patients receiving ESAs, to achieve and maintain the Hb target. In these patients, iron stores may be near normal, but during ESA treatment there may be insufficient immediately available iron to optimize ESA therapy. In this context, iron therapy significantly reduces ESA dose requirements.

Iron, ESA and Hb target

One of the recent hot topics in nephrology is the Hb target for treatment with ESAs and/or iron therapy. Given that cardiovascular morbidity and mortality are a major concern in CKD patients and that lower Hb levels have been associated with poor outcomes, the most important trials in the field have been designed mainly focusing on this primary end point. The hypothesis that complete anaemia correction with ESAs would reduce the risk of death and of cardiovascular and renal end points among patients with type 2 diabetes and CKD not undergoing dialysis was the rationale of the last trial in the field, the Trial to Reduce cardiovascular Events with Aranesp® Therapy (TREAT) [4]. More than 4000 patients were randomized to darbepoetin alfa to achieve an Hb level of 13 g/dL or to placebo (with rescue darbepoetin alfa for Hb levels <9.0 g/dL).

Criticisms of the TREAT trial

The TREAT trial is the best trial in the field of anaemia published to date. However, at the early stages of the study, many criticisms were made regarding its design, either due to ethical issues (a much lower Hb value than recommended by guidelines was allowed in the control group) or because it was considered of little informative use (the comparison did not take into account the 'gold standard' of treatment, i.e. partial anaemia correction according to current guidelines, Hb 11-12 g/dL) [1-3,5]. This study clearly demonstrated that the use of darbepoetin alfa aiming at an Hb target of 13 g/dL in type 2 diabetic patients not undergoing dialysis does not reduce the risk of the two primary composite outcomes.
In addition, secondary analyses showed a higher risk of stroke (mainly in patients with a history of stroke), of death due to cancer in patients with a history of malignancies, and of venous and arterial thromboembolic events in patients randomized to the higher Hb target, together with a significant reduction in cardiac revascularization procedures and number of transfusions, and a mild improvement in quality of life. The interpretation of the TREAT results is complex [6]. The simplest explanation is that higher Hb levels cause an increased occurrence of strokes through an increase in blood viscosity and perhaps blood pressure (median diastolic blood pressure was slightly higher in the darbepoetin alfa group). However, in the Normal Hematocrit Study [7] and in the Correction of Hemoglobin and Outcomes in Renal Insufficiency (CHOIR) trial [8], higher achieved Hb levels were associated with fewer cardiovascular events in each study arm. This leads to the hypothesis that the lower the dose of ESAs, the better the outcome [7]. However, the selection bias of survivors may play a role: patients achieving higher Hb concentrations may be healthier and thus more responsive to treatment. A secondary analysis of the CHOIR study [9] clearly pointed out that high ESA doses may be related to increased cardiovascular events unrelated to high Hb levels. The link between high ESA dose and negative outcomes may simply be explained by the fact that patients with more comorbidities, or those who are more inflamed, are hyporesponsive to ESA treatment. High ESA doses may stimulate EPO receptors other than those controlling erythropoiesis. This could exacerbate some pleiotropic effects of ESAs on endothelial and muscular cells. ESAs cause thrombocytosis in patients who are iron deficient. Erythropoiesis following ESA therapy may precipitate iron deficiency. This has been associated with increased platelet production and thus increased thrombotic risk. Therefore, high ESA dosage could cause cardiovascular events not only through high Hb levels. However, we should not misinterpret the association data for ESA doses. The need for a higher ESA dosage to achieve the same Hb levels (or even to fail to achieve them) is a marker of comorbidities (inflamed patients reach lower Hb levels despite higher ESA dosages). In the TREAT study, the median monthly darbepoetin dose in the group randomized to darbepoetin and the higher Hb level was rather high (176 μg; interquartile range, 104-305) compared to that used in everyday clinical practice in patients not on dialysis. The fact that the drug was administered once a month in the majority of the patients, and above all that some of them were not fully iron-replete, may have contributed to this high dose requirement. In fact, patients with a transferrin saturation as low as 15% were eligible for enrolment, and transferrin saturation and ferritin levels were measured only quarterly; moreover, there was no protocol for iron administration, and only 43% of the patients received iron at baseline, with 66.8% (14.8% IV) in the darbepoetin alfa group and 68.6% (20.4% IV) in the placebo group receiving iron during trial follow-up.

Are we going to change the way we treat our patients?

In my opinion, important limitations inherent to the study design reduce the general applicability of the TREAT results and do not support substantial changes in the way we manage anaemia in our patients.
Reading the intention-to-treat analysis of TREAT literally, one could draw the misleading conclusion that we should treat CKD patients with an ESA only if they have an Hb level below 9 g/dL that cannot be managed with blood transfusions [10]. This is further supported by the results of the secondary analyses (lower risk of stroke, thromboembolic events and death from malignancies in the 'placebo group' with the lower Hb target range). Conversely, much less emphasis has been put on other secondary outcomes, i.e. a lower risk of transfusions and cardiac revascularization and a better quality of life in the patients randomized to darbepoetin alfa and the higher Hb level. Moreover, the 'placebo group' did not receive a true 'placebo', since 46% of the patients had at least one dose of ESA [10]. Even more importantly, despite a rescue value of 9 g/dL, achieved Hb values progressively increased during follow-up (from a median value of 10.4 g/dL at baseline to 11.2 g/dL at the end of the study, with a median value of 10.6 g/dL during follow-up). These achieved values are very close to the target range suggested by current guidelines [1-3,5]. This positive trend runs against the common observation that CKD patients show a decrease in Hb values during the course of their disease, and it makes it hard to accept the TREAT study as a 'placebo' randomized controlled trial [10]. In addition to rescue treatment with darbepoetin alfa, the patients in the 'placebo' group were treated more frequently with IV iron (and blood transfusions) than the darbepoetin group. Given that many patients were relatively iron deficient at baseline, iron administration was successful in many of them in obtaining and maintaining partial anaemia correction without the need for ESAs. However, transfusions cannot be considered an alternative treatment for anaemia, and iron alone is not enough in the later stages of CKD, as strongly demonstrated by the experience of the pre-ESA era [11].

Conclusions

The findings of the TREAT study underline the great importance of iron supplementation in correcting anaemia, although this should be a well-established approach in everyday clinical practice and has already been clearly indicated by current guidelines [1-3,5]. The risk-benefit balance of more transfusions should also be considered carefully, especially for patients who are potential candidates for transplantation. Finally, the upper safety limit for iron administered to patients in order to minimize, as much as possible, the ESA dose, and the upper limit for ESA dosage for maintaining the target Hb range suggested by the current guidelines, are still open questions. Large prospective randomized trials are welcome to clarify this very important clinical issue.
2016-05-12T22:15:10.714Z
2011-06-01T00:00:00.000
{ "year": 2011, "sha1": "9420deea6133180de2ca3a424de26587ae7856d5", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/ckj/article-pdf/4/suppl_1/i3/1097673/sfr041.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "9420deea6133180de2ca3a424de26587ae7856d5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
255781710
pes2o/s2orc
v3-fos-license
Differential expression of CXCR1 and commonly used reference genes in bovine milk somatic cells following experimental intramammary challenge

Chemokine (C-X-C motif) receptor 1 (CXCR1 or IL-8RA) plays an important role in bovine mammary gland immunity. Previous research indicated that polymorphism c.980A > G in the CXCR1 gene influences milk neutrophils and mastitis resistance. In the present study, four c.980AG heifers and four c.980GG heifers were experimentally infected with Staphylococcus chromogenes. RNA was isolated from milk somatic cells one hour before and 12 hours after the experimental intramammary challenge. Expression of CXCR1 and eight candidate reference genes (ACTB, B2M, H2A, HPRT1, RPS15A, SDHA, UBC and YWHAZ) was measured by reverse transcription quantitative real-time PCR (RT-qPCR). Differences in relative CXCR1 expression between c.980AG heifers and c.980GG heifers were studied, and the effect of the experimental intramammary challenge on relative expression of CXCR1 and the candidate reference genes was analyzed. Relative expression of CXCR1 was not associated with polymorphism c.980A > G but was significantly upregulated following the experimental intramammary challenge. Additionally, differential expression was detected for B2M, H2A, HPRT1, SDHA and YWHAZ. This study reinforces the importance of CXCR1 in mammary gland immunity and demonstrates the potential effect of experimental intramammary challenge on expression of candidate reference genes in milk somatic cells.

Background

After invading the bovine mammary gland, pathogens can cause an intramammary infection (IMI) followed by an inflammatory response called mastitis. Neutrophils migrating from blood to milk play an important role in mammary gland immunity [1]. Binding of the cytokine interleukin-8 to chemokine (C-X-C motif) receptor 1 (CXCR1) causes chemotaxis and enhances viability of bovine neutrophils [2,3]. Despite its important function, many single nucleotide polymorphisms (SNPs) have been detected in the CXCR1 gene [4,5]. Recently, we reported a higher milk neutrophil viability and lower likelihood of IMI by major mastitis pathogens (e.g. Staphylococcus aureus and Streptococcus uberis) in heifers with genotype CXCR1 c.980AG compared to heifers with genotype CXCR1 c.980GG [4,6]. Polymorphism c.980A > G causes the amino acid change p.Lys327Arg in the C-terminal region of the receptor, potentially influencing interleukin-8 signal transduction. However, the phenotypical differences could also be explained by linkage with SNPs in regulatory regions and an association between SNP c.980A > G and CXCR1 gene expression. To test this hypothesis, we isolated RNA from milk somatic cells collected before and after an experimental challenge with Staphylococcus chromogenes. Next, differences in CXCR1 expression between c.980AG and c.980GG heifers were studied using reverse transcription quantitative real-time PCR (RT-qPCR). Additionally, the influence of the experimental intramammary challenge on expression of CXCR1 and commonly used reference genes was analyzed.

Test animals

This experiment was approved by the ethical committee of the Faculty of Veterinary Medicine, Ghent University (EC2012/73). A blood sample was taken from all Holstein heifers (n = 20) of the commercial dairy herd of Ghent University (Biocenter Agri-Vet, Melle, Belgium). The whole coding region of CXCR1 was genotyped by direct sequencing as previously described [4]. Four heifers with genotype c.980AG and 4 heifers with genotype c.980GG were selected.
Selected heifers were not siblings, had no history of clinical mastitis or other diseases, and were between 75 and 280 days in milk at the time of the experiment. Duplicate milk samples were taken four days and one hour before the experiment. Bacteriological culture was performed according to National Mastitis Council (NMC) guidelines [7]. All quarters of all heifers were culture-negative at that time. Quarter SCC was measured before and during the experiment in duplicate using a DeLaval cell counter (DCC, DeLaval International AB, Tumba, Sweden).

Experimental challenge

Data were available from a larger experimental infection study in which each heifer was inoculated briefly after the morning milking (8 a.m.) with two different strains of S. chromogenes, a strain of Staphylococcus fleurettii, and sterile phosphate buffered saline (PBS) in a split-udder design (one strain or PBS per individual quarter) to study differences in pathogenicity and immune response between coagulase-negative staphylococci. One S. chromogenes strain (S. chromogenes IM) originated from a chronically infected quarter [8], whereas the other (S. chromogenes TA) originated from the teat apex and was found to inhibit the growth of major pathogens in vitro [9]. The S. fleurettii strain originated from sawdust [8]. For each strain, 1 × 10^6 CFU in 5 mL PBS was inoculated using a sterile catheter (Vygon, Ecouen, France). The bacterial count was determined by incubating a tenfold serial dilution of a representative frozen aliquot 18 h before inoculation. Five mL of sterile PBS was inoculated in the fourth quarter (further referred to as neighboring quarters). For this research, additional milk samples of 600 mL were taken 1 h before and 12 h after inoculation from quarters (to be) inoculated with PBS or S. chromogenes IM (Figure 1). Cows were milked after sampling.

Milk somatic cell isolation

Samples were transported on ice to the laboratory, where milk was divided equally between three 400-mL centrifuge bottles, diluted 50% (vol/vol) with cold PBS, and centrifuged at 1500 × g for 15 min at 4°C in a fixed-angle rotor. The supernatant was discarded. The three milk somatic cell pellets were resuspended in a total of 40 mL PBS, divided between two 50-mL Falcon tubes and washed three times with 10 mL cold PBS (centrifugation at 200 × g for 10 min at 4°C). The final milk somatic cell pellets were suspended in 1 mL of RPMI 1640 (Gibco Brl., Scotland, UK) supplemented with 1% BSA (Merck KGaA, Darmstadt, Germany). Twenty μL of the suspension was diluted with 380 μL low-SCC milk (SCC < 50 cells/mL) and measured with a DeLaval cell counter to estimate the cell concentration using the following formula:

SCC_sample (in cells/μL) = (400 × SCC_mix − 380 × SCC_milk) / 20.

Approximately 5 × 10^6 cells were pipetted into a 2-mL test tube, pelleted by centrifugation at 16,100 × g for 1 min at 4°C and resuspended in 1 mL TRI Reagent Solution (Ambion, Austin, TX). If fewer cells were available, all of the cell suspension was used. Samples were frozen and stored at -20°C for 8-10 months.
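As a concrete illustration of the dilution correction above, the following minimal Python sketch back-calculates the cell concentration of the original suspension from the two counter readings; the function name and example values are ours, not from the paper.

```python
def scc_sample(scc_mix: float, scc_milk: float) -> float:
    """Cell concentration (cells/uL) of the original suspension, given the
    counter reading of the 400 uL mixture (20 uL suspension + 380 uL carrier
    milk) and the reading of the low-SCC carrier milk alone."""
    return (400 * scc_mix - 380 * scc_milk) / 20

# Hypothetical readings: mixture at 260 cells/uL, carrier milk at 0.05 cells/uL
print(scc_sample(260, 0.05))  # -> 5199.05 cells/uL in the original suspension
```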
RNA extraction and cDNA synthesis

RNA was isolated following the manufacturer's instructions for TRI Reagent Solution (Ambion, Austin, TX). Genomic DNA was removed by adding 4 μL RQ1 DNase (0.5 U/μL, Promega, Leiden, Netherlands) and 2.7 μL RQ1 DNase 10X Reaction Buffer (Promega), followed by incubation for 30 min at 37°C. The reaction was terminated by adding 3 μL RQ1 DNase Stop Solution (Promega), followed by incubation for 10 min at 65°C. The RNA was purified by spin-column centrifugation (Amicon Ultra-0.5 centrifugal filter device, Merck Millipore, Billerica, MA) to approximately 15 μL. Its concentration and purity were estimated using a ND-1000 spectrophotometer (NanoDrop, Wilmington, NC). RNA degradation was analyzed by gel electrophoresis of a representative sample. Due to the low yield of many samples, an additional PCR assay was designed to analyze the cDNA integrity of all samples (see below).

PCR assay to assess cDNA integrity

Complementary DNA integrity was assessed using two 4-primer PCR assays amplifying fragments of approximately 100, 500 and 900 bp of YWHAZ and CXCR1, respectively. For both assays, a forward and 3 reverse PCR primers were designed using Primer3Plus [10] and synthesized by IDT. Sequences are shown in Table 1. Regions forming potential secondary structures were identified with Mfold [11] and avoided. Specificity of primer binding was analyzed using NCBI BLAST [12]. For both assays, a PCR mix was made in a total volume of 10 μL, containing 2 μL 5× diluted cDNA, 1.0 μL 10× FastStart Taq DNA Polymerase Buffer (Roche Applied Science), 0.3 μL dNTP Mix (10 mM each; BIOLINE), 1 μL forward primer (5 μM), 0.3 μL reverse primer 1 (5 μM), 0.3 μL reverse primer 2 (5 μM), 0.6 μL reverse primer 3 (5 μM) and 0.1 μL Taq DNA Polymerase (5 U/μL, Roche Applied Science). The PCR program consisted of an initiation step of 4 min at 95°C followed by 40 amplification cycles (denaturation for 45 s at 95°C, annealing for 45 s at the optimal annealing temperature and extension for 1 min 30 s at 72°C) and a final 4-min elongation step at 72°C. The optimal annealing temperatures for the YWHAZ and CXCR1 assays (60°C and 65°C, respectively) were determined experimentally. Complementary DNA amplification was examined by electrophoresis on an ethidium bromide-stained agarose (0.8%) gel (150 V, 25 min). The cDNA integrity was considered excellent, sufficient or insufficient if, respectively, 3, 2 or 1 bands were visible in both assays (Figure 2).
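The band-count rule above maps directly onto a small decision function. The sketch below is our illustration only; in particular, taking the minimum over the two assays when they disagree is our assumption, since the text defines only the case where both assays agree.

```python
def cdna_integrity(bands_ywhaz: int, bands_cxcr1: int) -> str:
    """Classify cDNA integrity from the number of visible bands (1-3) in the
    YWHAZ and CXCR1 4-primer assays: 3 bands -> excellent, 2 -> sufficient,
    1 -> insufficient."""
    n = min(bands_ywhaz, bands_cxcr1)  # assumption: the weaker assay decides
    return {3: "excellent", 2: "sufficient", 1: "insufficient"}[n]

print(cdna_integrity(3, 2))  # -> "sufficient"
```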
RT-qPCR

Ten candidate reference genes were selected based upon previous research [13-15]: ACTB, B2M, H2A, HPRT1, PPP1R11, RPS15A, SDHA, TBP, UBC, and YWHAZ. Primers were ordered from IDT. Gene, primer and amplicon information is listed in Table 2. A PCR mix of 10 μL containing 5 μL 2× SYBR Green I Master Mix (Roche Diagnostics, Basel, Switzerland), 1 μL forward primer (5 μM), 1 μL reverse primer (5 μM) and 2 μL cDNA sample was made. The PCR program consisted of an initiation step of 3 min at 95°C, followed by 40 amplification cycles (denaturation for 30 s at 95°C, annealing-elongation for 40 s at the optimal annealing temperature, and detection of the fluorescent signals generated by SYBR Green I binding to dsDNA). Samples were then heated from 75°C to 95°C in 0.5°C increments per 5 s while continuously measuring fluorescence. The generated melt curve was used to confirm a single gene-specific peak and to detect primer-dimer formation. Optimal annealing temperatures were determined experimentally by gradient qPCR on a 4-fold serial dilution down to 1/1024 of pooled cDNA from all samples. All reactions were performed in duplicate. In each run, the serial dilution and a no-template control were included to analyze calibration curves, PCR efficiency (E) and the squared correlation coefficient (r²), and to check for PCR contamination. All qPCRs were performed in PCR strips (Bio-Rad, Hercules, CA) using a CFX96 Touch™ Real-Time PCR Detection System (Bio-Rad). Quantification cycles (Cqs) were analyzed with CFX Manager™ Software v3.1 (Bio-Rad). The raw Cq values were converted to relative quantities (Q) using the following formula:

Q = (1 + E)^(CqS − CqL),

with E = PCR efficiency, CqS = Cq value of the sample and CqL = lowest Cq value of all samples.

Analysis of gene expression stability

Stability of the different candidate reference genes was analyzed using the Normfinder (version 0.953, [16]) Excel Add-In. Samples were grouped as (1) all samples 1 h before inoculation, (2) samples from quarters inoculated with PBS 12 h after inoculation, and (3) samples from quarters inoculated with S. chromogenes IM 12 h after inoculation. Normfinder estimates an expression stability measure (ρ) per candidate reference gene based on the overall variation of the expression and the variation of the expression between the subgroups [16].

Data analysis

Differences in gene expression between CXCR1 genotypes and sample subgroups were further studied using SAS 9.4 (SAS Institute Inc., NC, USA). First, expression of B2M, CXCR1, H2A, HPRT1, SDHA and YWHAZ was normalized by the geometric mean of the three most stable genes (ACTB, RPS15A and UBC; see below). Expression of ACTB, RPS15A and UBC was normalized to the geometric mean of the other two most stable genes. Data were log transformed to obtain a normalized distribution. Secondly, a linear mixed regression model was fit with relative expression as outcome variable and heifer and quarter as random effects to correct for clustering of quarters within cows and for two observations per quarter, respectively (PROC MIXED, SAS 9.4). Sample subgroup (1, 2 and 3) was added as a fixed effect. In the model for CXCR1, genotype (c.980AG and c.980GG) and the interaction between sample subgroup and genotype were also tested. Statistical significance was assessed at P ≤ 0.05.
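A minimal sketch of the Cq-to-relative-quantity conversion and the reference-gene normalization described above; the function names, array layout and example values are ours, not from the paper.

```python
import numpy as np

def relative_quantities(cq, efficiency):
    """Convert raw Cq values to relative quantities, Q = (1 + E)**(CqS - CqL),
    where CqL is the lowest Cq observed for the assay."""
    cq = np.asarray(cq, dtype=float)
    return (1.0 + efficiency) ** (cq - cq.min())

def log_normalized_expression(target_q, reference_qs):
    """Normalize target quantities by the geometric mean of the reference-gene
    quantities (here ACTB, RPS15A and UBC) and log-transform the result."""
    geo_mean = np.exp(np.mean(np.log(reference_qs), axis=0))
    return np.log(np.asarray(target_q) / geo_mean)

# Hypothetical Cq values for one gene across 4 samples, with E close to 1:
q = relative_quantities([24.1, 25.0, 23.8, 26.3], efficiency=0.98)
```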
RT-qPCR and gene expression stability

The gradient qPCR of PPP1R11 and TBP on the 1/1024 dilution of the pooled cDNA showed weak fluorescent signals and melt peaks, indicating low expression of these genes in the samples. They were not further tested as reference genes. The calibration curves of the remaining candidate reference genes and CXCR1 demonstrated PCR efficiencies close to 100% and correlation coefficients close to 1, indicating good assay performance. Median Cq values were low. The SD of the Cq values of replicate samples was limited, demonstrating good repeatability (Table 3). Normfinder identified UBC, RPS15A and ACTB as the most stable genes based on their low inter- and intragroup variation in expression (Figure 3).

Effect of genotype and experimental challenge on gene expression

Genotype and the interaction between genotype and sample subgroup were non-significant (P = 0.55 and 0.26, respectively) and were removed from the model for relative expression of CXCR1. Relative expression of CXCR1 was significantly higher in milk somatic cells from the infected and neighboring quarters 12 h after inoculation compared to milk somatic cells isolated 1 h before inoculation (both P < 0.01). Additionally, relative expression of B2M, H2A, HPRT1, SDHA and YWHAZ differed significantly between milk somatic cells from the infected quarters 12 h after inoculation and milk somatic cells isolated 1 h before inoculation (P < 0.05). Furthermore, relative expression of B2M and YWHAZ differed significantly between milk somatic cells from the neighboring quarters 12 h after inoculation and milk somatic cells isolated 1 h before inoculation. Relative expression of the three most stable genes (ACTB, RPS15A and UBC) did not differ significantly between the subgroups (Table 4).

Discussion

Gene expression analysis in experimentally infected and healthy quarters allows for identification of differentially expressed genes and pathways. Quantitative real-time PCR in milk somatic cells isolated prior to inoculation and at different stages of experimental infection enables a detailed follow-up of the host response. In this study, we analyzed associations between SNP CXCR1 c.980A > G and CXCR1 expression in milk somatic cells and studied the influence of an experimental intramammary challenge with S. chromogenes on expression of CXCR1 and eight commonly used reference genes.

[Figure 3. Gene expression stability of candidate reference genes in milk somatic cells analysed using Normfinder software. Intergroup variation (+ intragroup variation) of expression of 8 candidate reference genes in milk somatic cells isolated from quarters inoculated with PBS (n = 8) or Staphylococcus chromogenes (n = 8) of 8 dairy heifers, grouped as all quarters 1 h before inoculation, neighboring quarters 12 h after inoculation, and infected quarters 12 h after inoculation. Candidate reference genes are ranked on gene expression stability (ρ) calculated using Normfinder [16], with the most stable genes on the right side (smallest inter- and intragroup variation).]

Because IMI in one quarter can influence gene expression and immunity in neighboring quarters [17,18], we compared values before and after challenge rather than infected and non-infected quarters. Compared to biopsies, milk somatic cells allow for easy resampling but yield less RNA, especially if the SCC is low [19]. Because of the low yield in some samples, we opted to isolate RNA from all milk somatic cells rather than from a subpopulation (e.g. neutrophils). RNA integrity can influence RT-qPCR results but is not easy to assess [20]. Besides analyzing the rRNA integrity of a representative sample using gel electrophoresis, we designed two four-primer PCR assays amplifying three fragments of YWHAZ and CXCR1 to test the cDNA integrity of all samples. The assays are based on the fact that if integrity is too low, amplification of large fragments of approximately 500 and 900 bp will be affected. The latter fragments are more than 4 times as long as the amplicon of the target gene in the qPCR (118 bp). Polymorphism c.980A > G was not associated with CXCR1 expression in milk somatic cells. Yet, relative CXCR1 expression increased in milk somatic cells isolated from infected quarters, which corresponds well with in vitro research showing increased CXCR1 expression in blood neutrophils after in vitro LPS challenge [21]. To a lesser extent, transcription also increased in milk somatic cells from neighboring quarters. This might be due to cross-talk with the infected quarters or due to the inoculation of PBS. Although SCC increased little in the neighboring quarters, we cannot exclude a minimal inflammation caused by the insertion of the catheter, the PBS or both. The much higher increase in SCC and CXCR1 expression in the infected compared to the neighboring quarters suggests that inflammation in the infected quarters was mainly due to experimental IMI. The experimental challenge had a significant effect on the relative expression of 5 out of 8 candidate reference genes. It is important to mention that the candidate reference genes were selected based on their stable expression in other studies [13-15].
Reference genes in experimental infection studies should be stably expressed and unaffected by IMI. Validation of the reference genes used to normalize gene expression data is not always published [20]. Although normalization to a single reference gene can cause relatively large errors [22], it is often practiced [17,23]. If the expression of this single reference gene is affected by IMI, certain genes might be falsely identified as up- or downregulated, whereas truly up- or downregulated genes might be missed.

Conclusion

In conclusion, CXCR1 expression in milk somatic cells was not associated with SNP c.980A > G but was upregulated following experimental IMI with S. chromogenes. Additionally, differential expression was observed for candidate reference genes B2M, H2A, HPRT1, SDHA and YWHAZ. The effect of intramammary challenge on expression of reference genes should be tested and reported in future studies on gene expression in milk somatic cells.

[Table 4. Linear mixed models describing the difference in gene expression in milk somatic cells before and after experimental challenge. Two samples showed insufficient cDNA integrity and were therefore removed from the dataset. Footnotes: (c) regression coefficient; (d) standard error. Heifer and quarter were added as random effects to correct for clustering of quarters within cows and for two observations per quarter, respectively.]

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

JV and SDV designed the experiment. JV and MVP optimized all protocols. JV collected milk samples, extracted RNA, performed the qPCR and analyzed the data with the help of MVP. JV drafted the manuscript. MVP, LP and SDV gave critical comments on the manuscript. All authors read and approved the final manuscript.
2023-01-14T15:04:55.897Z
2015-04-22T00:00:00.000
{ "year": 2015, "sha1": "281edd4bea28b3959011502f1d5401444f9cfc1a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12863-015-0197-9", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "281edd4bea28b3959011502f1d5401444f9cfc1a", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Biology" ], "extfieldsofstudy": [] }
259166705
pes2o/s2orc
v3-fos-license
Anti-apoptotic protein BCL-XL as a therapeutic vulnerability in gastric cancer

Abstract

Background: New therapeutic targets are needed to improve the outcomes for gastric cancer (GC) patients with advanced disease. Evasion of programmed cell death (apoptosis) is a hallmark of cancer cells, and direct induction of apoptosis by targeting the pro-survival BCL2 family proteins represents a promising therapeutic strategy for cancer treatment. Therefore, understanding the molecular mechanisms underpinning cancer cell survival could provide a molecular basis for potential therapeutic interventions.

Method: Here we explored the role of BCL2L1 and the encoded anti-apoptotic protein BCL-XL in GC. Droplet Digital PCR (ddPCR) was used to investigate DNA amplification of BCL2L1 in GC samples and GC cell lines. The sensitivity of GC cell lines to the selective BCL-XL inhibitors A1155463 and A1331852, the pan-inhibitor ABT-263, and the VHL-based PROTAC-BCL-XL was analyzed in vitro using a CellTiter-Glo (CTG) assay. Western blotting (WB) was used to detect the protein expression of BCL2 family members in GC cell lines and the manner in which PROTAC-BCL-XL kills GC cells. Co-immunoprecipitation (Co-IP) was used to investigate the mechanism by which A1331852 and ABT-263 kill GC cell lines. ddPCR, WB and real-time PCR (RT-PCR) were used to investigate the correlation between DNA, RNA and protein levels and drug activity.

Results: The functional assays showed that a subset of GC cell lines relies on BCL-XL for survival. GC cell lines are more sensitive to the BCL-XL inhibitors A1155463 and A1331852 than to the pan-BCL2-family inhibitor ABT-263, indicating that ABT-263 is not an optimal inhibitor of BCL-XL. The VHL-based PROTAC-BCL-XL DT2216 appears to be active in GC cells. DT2216 induces apoptosis of gastric cancer cells in a time- and dose-dependent manner through the proteasome pathway. Statistical analysis showed that the BCL-XL protein level predicts the response of GC cells to BCL-XL-targeting therapy, and BCL2L1 gene CNVs do not reliably predict BCL-XL expression.

Conclusion: We identified BCL-XL as a promising therapeutic target in a subset of GC cases with high levels of BCL-XL protein expression. Functionally, we demonstrated that both selective BCL-XL inhibitors and the VHL-based PROTAC-BCL-XL can potently kill GC cells that are reliant on BCL-XL for survival. However, we found that BCL2L1 copy number variations (CNVs) cannot reliably predict BCL-XL expression, whereas the BCL-XL protein level serves as a useful biomarker for predicting the sensitivity of GC cells to BCL-XL-targeting compounds. Taken together, our study pinpoints BCL-XL as a potentially druggable target for specific subsets of GC.

INTRODUCTION

The current standard-of-care therapies of surgery and adjuvant chemo/radiotherapy for patients with GC have markedly improved their outcomes, especially for those in the early stages of the disease. However, the overall clinical outcome for patients with advanced GC remains poor, with a median survival time of only 12-15 months and a 5-year OS of approximately 5-25% [1]. The introduction of targeted therapies and immunotherapies, including the anti-HER2 agents (e.g. trastuzumab, trastuzumab deruxtecan), the anti-VEGFR2 antibody ramucirumab and the anti-PD-1 antibody pembrolizumab, has had some success in patients with advanced GC [2,3]. However, these therapies are only effective in subgroups of GC patients who possess certain biomarkers, such as HER2 overexpression/amplification, PD-L1/MSI status, etc.
Notably, the high rate and rapid emergence of resistance further limit the use of these treatments [2,3]. Therefore, there is an unmet need to identify new therapeutic vulnerabilities or targets for GC interventions. Programmed cell death pathways, including apoptosis, serve as natural barriers to cancer pathogenesis [4]. The essential executioner proteins in the intrinsic (mitochondrial or BCL2-regulated) pathway to apoptosis are BAX and BAK; once these become activated, they drive mitochondrial outer membrane permeabilization (MOMP), committing the cell irreversibly to apoptosis. In the absence of apoptosis-inducing signals, BAX/BAK are kept in check by the pro-survival proteins, including BCL2, BCL-XL, MCL1, BCLw and BFL-1/BCL2A1 [5]. In many cancers, the propensity to undergo apoptosis is impaired because of sustained pro-survival signaling [6,7]. Accordingly, pharmacologic inhibitors targeting pro-survival BCL2 proteins have been developed to induce apoptosis in cancer cells [8,9]. Various malignancies show recurrent genetic amplifications affecting pro-survival members of this family, particularly MCL1 and BCL2L1 [10], but there are also studies indicating that BCL2L1 copy number variations (CNVs) are not associated with corresponding expression levels [11] and that BCL2L1 gain/amplification may not exert the same biological function as overexpression [12]. Amplification of BCL2L1 is reported in 11 (10.7%) of the 103 GC cases analyzed by aCGH, while the putative amplification rate of BCL2L1 using the GISTIC algorithm with the TCGA dataset is 2.7% (6/220) (www.cbioportal.org) [13]. Experiments using siRNA targeting BCL-XL, or ABT-737, a BH3-mimetic compound inhibiting BCL2, BCL-XL and BCLw, showed that BCL2L1-amplified GC cell lines are more susceptible to this inhibitor than BCL2L1-nonamplified cells. However, it was later reported that the BCL2L1-amplified GC cell lines MKN-28 and MKN-74 used in this study were cross-contaminated. Given the elusive role of BCL2L1 in GC, we therefore set out to characterize the role of BCL2L1 and the encoded anti-apoptotic protein BCL-XL in GC cell survival and to look for biomarkers predicting the response of GC cells to BCL-XL-targeting therapy. Here, we provide evidence that BCL-XL is a promising therapeutic target in GC.

Lentivirus production and infection

The lentivirus packaging plasmids pMDLg/pRRE, pRSV-Rev and pCMV VSV-G were transiently transfected into HEK293T cells with the constructs of interest using Lipofectamine™ 3000 Transfection Reagent (Thermo, L3000015). Supernatant containing infectious virus particles was harvested 48 h later. A second viral harvest was made following a further 24 h of incubation with fresh medium. Virus-containing supernatant was filtered through a 0.45 μm filter and stored at 4°C or −80°C until used. Typically, GC cells were seeded into 6-well plates at 500,000 cells/well. An equivalent volume of virus-containing culture medium was added, along with polybrene (Sigma) to a final concentration of 5 μg/mL. Cells were transduced with a Cas9-expressing construct (mCherry) and the pKLV-gRNAs targeting BAX and BAK (BFP). mCherry+ BFP+ cells were sorted into 96-well plates at one cell per well using a BD FACSAria™ II flow cytometer. Mutation of the targeted DNA was then confirmed by targeted PCR followed by Sanger DNA sequencing. Single-cell clones with frameshift mutations in both BAX and BAK were used for this study. The inducible guide RNA vector FgH1tUTG was used to delete BCL-XL. The sgRNA and primer sequences are described in Supplementary Table S1.

Cell viability assays

To test the response of GC cell lines to BH3-mimetics, cells were seeded in 96-well plates at 3 × 10^3 cells/well and treated with titrations of the compounds.

Droplet digital PCR (ddPCR)

Genomic DNA from both GC cell lines and primary samples was used for the ddPCR analysis of BCL2L1 copy number.
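For readers unfamiliar with ddPCR-based CNV calling, the sketch below shows one common way to turn droplet concentrations into a copy-number estimate; the formula, the choice of a diploid reference assay and all numbers are illustrative assumptions on our part, since the paper does not spell out its calculation.

```python
def copy_number(target_conc: float, reference_conc: float, ref_copies: int = 2) -> float:
    """Estimate gene copy number from ddPCR absolute concentrations (copies/uL),
    assuming a reference assay present at ref_copies per diploid genome."""
    return ref_copies * target_conc / reference_conc

# Hypothetical droplet concentrations for a BCL2L1 assay and a diploid reference:
cn = copy_number(target_conc=3100.0, reference_conc=2000.0)
print(f"BCL2L1 copy number ~ {cn:.1f}")  # ~3.1, 'amplified' under a >= 3 copies cut-off
```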
RT-PCR

Total RNA was extracted using Trizol reagent (Thermo Fisher Scientific).

A subset of GC cell lines relies on BCL-XL for survival

To study the role of BCL-XL in GC cell survival, we collected 7 different human GC cell lines and treated them with the two selective BCL-XL inhibitors A1155463 [14] and A1331852 [15], as well as with ABT-263 [16], which targets BCL2, BCL-XL and BCLw and is in clinical trials. Interestingly, 3 out of 7 GC cell lines could be readily killed by the BCL-XL-selective inhibitors A1155463 and A1331852 (Figure 1A). Of note, both selective BCL-XL inhibitors showed higher rates of killing of these GC cell lines than ABT-263 (Figure 1A). Importantly, we confirmed that the effect exerted by these inhibitors occurs via apoptosis, as genetic removal of the downstream pro-apoptotic executioners BAX and BAK abolished the killing of these GC cell lines by these BH3-mimetic drugs (Figure 1B). The role of BCL-XL in maintaining GC cell survival was further confirmed by genetic deletion of BCL-XL using the inducible CRISPR/Cas9 system in 23132/87 cells (Figure 1C) [17].

ABT-263 is not an optimal inhibitor of BCL-XL

BCL-XL plays an important role in cancer pathogenesis and in mediating drug resistance to standard chemotherapy and targeted therapies, which underpins the ongoing clinical trials of combination therapies testing ABT-263 in solid tumors and lymphoid malignancies [9]. Intriguingly, we only observed subtle activity of ABT-263 in the BCL-XL-dependent GC cell lines (Figure 1A). To understand the low activity of ABT-263 in inhibiting BCL-XL, even though it exhibited comparable binding affinities to BCL2, BCL-XL and BCLw in vitro [16], we compared the ability of A1331852 and ABT-263 to release the pro-apoptotic initiator of apoptosis, BIM, from its restraint by BCL-XL. Consistent with the functional data, ABT-263 was much less effective at displacing BIM from BCL-XL than A1331852 (Figure 1D). Moreover, we found that ABT-263 was much more potent at displacing BIM from BCL2 than from BCL-XL, confirming that ABT-263 is a more potent inhibitor of BCL2 than of BCL-XL (Figure 1D).
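The killing comparisons above come from the CTG titrations described in the methods. As a concrete illustration, here is a minimal sketch of how such titration data could be normalized to vehicle control and fit with a four-parameter logistic curve to extract an IC50; the fitting approach and all numbers are our illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical CTG readings normalized to the DMSO control
doses = np.array([0.001, 0.01, 0.1, 1.0, 10.0])        # uM
viability = np.array([0.98, 0.90, 0.45, 0.12, 0.05])   # fraction of control
(bottom, top, ic50, hill), _ = curve_fit(four_pl, doses, viability,
                                         p0=[0.0, 1.0, 0.1, 1.0])
print(f"IC50 ~ {ic50:.3f} uM")
```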
VHL-based PROTAC-BCL-XL DT2216 appears to be active in GC cells

Despite the superior ability of small-molecule inhibitors of BCL-XL to induce killing in BCL-XL-reliant GC cells, the clinical utility of direct BCL-XL inhibitors is largely limited by their on-target and dose-limiting platelet toxicity [18,19]. The recent development of proteolysis targeting chimeras (PROTACs) to induce degradation of targeted proteins has gained momentum [20] and has changed the landscape of drug development. Identification of the most promising protein targets to be degraded, and of a ligase that is highly expressed in tumors compared with normal tissues, ensures augmented efficacy and reduced toxicity. Accordingly, several BCL-XL-specific PROTACs have been reported with reduced toxicity in platelets but potent anti-tumor activity in hematopoietic malignancies [20-22] and small-cell lung cancer [23]. Interestingly, a recent study using in silico analyses showed that the E3 ligase MDM2 is highly expressed in GC cells and concluded that BCL-XL PROTACs coupled with the MDM2 ligase represent a potential therapeutic approach in GC [24,25]. We therefore first determined the expression of MDM2 in our GC cell lines alongside three blood cancer cell lines. Unexpectedly, heterogeneous expression of MDM2 was observed in the GC cell lines, and the abundance of MDM2 was even lower in the three BCL-XL-dependent GC cell lines 23132/87, NCI-N87 and SNU216 (Figure 2A). Instead, we detected ubiquitously high expression of the E3 ligase VHL in all the GC cell lines, comparable to that in the blood cancer cell lines (Figure 2A). More importantly, only low levels of both MDM2 and VHL were detected in platelets (Figure 2B). The abundant expression of VHL in GC cells but not in platelets indicated that BCL-XL PROTACs coupled with VHL are likely to achieve anti-tumor effects with only low toxicity in platelets. To test this, we assessed the activity of the VHL-based BCL-XL PROTAC DT2216 in our panel of GC cell lines [20]. Notably, DT2216 induced both dose- and time-dependent killing of all three BCL-XL-dependent GC cell lines (23132/87, SNU216 and NCI-N87) (Figure 2C). Importantly, we confirmed that the effect caused by DT2216 is predominantly BAX/BAK dependent, although minor BAX/BAK-independent killing was observed at a very high concentration (10 μM) (Figure 2D, Supplementary Figure S1A). To validate the mechanism of action of DT2216 against GC cells, we used BAX/BAK-deficient SNU216 cells as a model system, as loss of BAX/BAK excludes the indirect degradation of BCL2 family proteins due to activation of the downstream caspase cascade [26]. We found that DT2216 caused a dose- and time-dependent decrease of BCL-XL protein (Figure 2E,F, Supplementary Figure S1B). In addition, we found that pre-incubation of SNU216 cells with the proteasome inhibitor MG-132 prevented the degradation of BCL-XL induced by DT2216 in the GC cell lines (Figure 2G). Consistent with previous studies [20,22], no BCL2 degradation was detected, but we also observed a reduction of the MCL1 protein level at higher concentrations (10 μM) (Figure 2E). Although previous studies suggested that activation of caspase-3 upon DT2216 treatment contributes to the reduction of MCL1 protein [20,22], we excluded this mechanism in our studies, as no cleaved caspase-3 was detected at any of the doses tested in the BAX/BAK-deficient cells (data not shown). This therefore needs to be addressed by future studies. Collectively, these data indicate that DT2216 kills BCL-XL-dependent GC cells through proteasome-mediated degradation of BCL-XL.

BCL-XL protein level predicts the response of GC cells to BCL-XL-targeting therapy

It was previously reported that GC cell lines with BCL2L1 gene amplification were more susceptible to ABT-737 treatment, but the study was only conducted in two BCL2L1 gene-amplified GC cell lines, MKN-28 and MKN-74, and a cross-contamination between these two cell lines was later reported. Given the limited number of cell lines used in the previous study, we examined whether BCL2L1 CNVs predict the susceptibility of GC cells to the BCL-XL-targeting drugs using our panel of GC cell lines. To address this, we determined BCL2L1 CNVs as well as the mRNA and protein levels of the anti-apoptotic protein BCL-XL in our 7 GC cell lines (Figure 3A). Surprisingly, although there was a significant correlation between BCL-XL mRNA and BCL-XL protein expression, we did not detect a correlation between BCL2L1 CNVs and BCL-XL mRNA or BCL-XL protein expression (Figure 3B).
Of note, we found that GC cell lines expressing high levels of BCL-XL protein were more susceptible to BCL-XL inhibition, with no significant association between BCL2L1 CNVs and response to the BCL-XL inhibitors detected (Figure 3C). A similar comparison was made against drug sensitivity, but no significant correlation was detected (Figure 3D,E).

BCL2L1 gene CNVs do not reliably predict BCL-XL expression

Our results in GC cell lines suggested that BCL-XL expression is controlled at multiple levels and cannot be predicted solely by its gene amplification. We therefore further addressed this question using primary samples derived from GC patients. Paired normal and tumor tissues from 18 GC patients were collected, and targeted ddPCR was performed to determine BCL2L1 CNVs. Consistent with previous studies, 10.5% (2/18) of the GC cases studied showed BCL2L1 gene amplification (≥ three copies). Although both samples with BCL2L1 gene amplification (GC#1 and #2) showed relatively high levels of BCL-XL protein expression, similarly high levels of BCL-XL were also detected in another 5 tumor samples that had no BCL2L1 gene amplification (GC#3-#7), and no correlation between BCL2L1 CNVs and BCL-XL protein levels was detected in these samples (Figure 4A). Surprisingly, 6 of the 18 GC cases with no BCL2L1

DISCUSSION

The main focus of our study was to elucidate the role of BCL2L1 and the encoded anti-apoptotic protein BCL-XL in GC cell survival and to look for biomarkers predicting the response of GC cells to BCL-XL-targeting therapy. Using selective BCL-XL inhibitors (Figure 1), a proteolysis targeting chimera inducing BCL-XL degradation (Figure 2) and a genetic approach (Figure 1), we showed that a subset of GC cell lines (3/7) requires BCL-XL for survival and that BCL-XL protein levels, but not BCL2L1 gene CNVs, serve as a useful biomarker predicting the sensitivity of GC cells to BCL-XL-targeting drugs (Figure 3). Amplification of the BCL2L1 gene has been implicated in promoting aberrant survival of malignant cells in various cancers, including GC [10,13]. However, unlike in the case of CNVs of the MCL1 gene [11], the relationship between BCL2L1 CNVs and BCL-XL protein expression has been elusive. Early studies in non-small-cell lung cancer and ovarian cancer showed that BCL2L1 CNVs are not associated with high levels of BCL-XL protein and that BCL2L1 gene gain/amplification may not exert the same biological function as overexpression [11,12]. Here, using GC cell lines and primary GC samples, we showed that BCL2L1 CNVs do not reliably predict high levels of BCL-XL protein expression (Figures 3 and 4), suggesting that BCL-XL expression is regulated at multiple levels in these cancer cells. Accordingly, the BCL-XL protein level, but not BCL2L1 CNVs, predicts the sensitivity of GC cells to BCL-XL-targeting compounds.
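The "no correlation" statements above correspond to a simple correlation test; the sketch below is our illustration only — the choice of Spearman's rank statistic and all values are hypothetical, as the paper does not state which test was used.

```python
from scipy.stats import spearmanr

# Hypothetical per-sample values for illustration only
bcl2l1_copies = [2.0, 3.1, 2.1, 1.9, 3.4, 2.0, 2.2]
bclxl_protein = [1.0, 2.1, 1.8, 0.4, 1.9, 2.0, 0.6]  # e.g. densitometry units

rho, p = spearmanr(bcl2l1_copies, bclxl_protein)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```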
Given the central role of BCL-XL in maintaining cancer cell survival and mediating drug resistance in solid tumors [30], ABT-263 (Navitoclax) in combination with chemotherapy or targeted anticancer therapies is currently being evaluated in multiple clinical trials [31,32]. However, although ABT-263 showed high binding affinity to both BCL2 and BCL-XL in vitro [16], we only observed subtle activity of ABT-263 in the BCL-XL-dependent GC cell lines. Our mechanistic study showed that ABT-263 is a more potent inhibitor of BCL2 than of BCL-XL (Figure 1). This is consistent with a previous study in human non-Hodgkin lymphomas, which showed that high expression of BCL2, but not BCL-XL, predicted sensitivity to ABT-263 [33]. These data suggest that effective targeting of BCL-XL with highly specific inhibitors, rather than with BH3-mimetic drugs targeting not only BCL-XL but also BCL2 (and BCLw), may be required to exert optimal anti-tumor effects in clinical studies. Apart from the sub-optimal activity of ABT-263 in inhibiting BCL-XL, the on-target and dose-limiting toxicity in platelets of direct BCL-XL targeting has further limited its clinical utility thus far [18,19]. Strategies that restrict the action of BCL-XL inhibitors to tumor cells can potentially reduce the on-target toxicity in platelets and thereby yield an enhanced therapeutic index [20,34-36]. Here, we found that the E3 ligase VHL, but not MDM2, is highly expressed in GC cells with minimal expression in platelets, suggesting that a BCL-XL PROTAC coupled with VHL may achieve potent anti-tumor efficacy with reduced toxicity. Indeed, our in vitro studies with the VHL-based PROTAC-BCL-XL DT2216 showed high activity in inducing apoptosis in BCL-XL-dependent GC cells via proteasome-mediated degradation of BCL-XL, achieving an effect similar to that of the selective BCL-XL inhibitor A1331852. However, the anti-tumor effect of DT2216, and whether it indeed spares platelets, needs to be further investigated using in vivo GC models. In summary, our study highlights the importance of understanding the molecular mechanisms underpinning cancer cell survival and identifies BCL-XL as a promising therapeutic target in GC cases with high levels of BCL-XL protein expression. The development of strategies such as PROTACs and antibody-drug conjugates that restrict the action of BCL-XL-targeting BH3 mimetics to tumor cells holds promise for the treatment of GC patients with advanced disease.

AUTHOR CONTRIBUTIONS

YMW, LPZ, JNG and RG designed the study; DBZ provided the primary GC samples and reviewed the paper; YMW, ZLP, CW, GMX, XJY and MYL performed experiments; MJL analyzed the data; ZFL and SYR collected the primary GC samples used in this study. This study was supervised by DBZ, JNG and RG.

ACKNOWLEDGMENTS

We thank Andreas Strasser from the Walter and Eliza Hall Institute of Medical Research for editing the manuscript.

CONFLICT OF INTEREST STATEMENT

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Ran Gao and Jia-Nan Gong are Editorial Board members of AMEM and co-authors of this article. To minimize bias, they were excluded from all editorial decision-making related to the acceptance of this article for publication.
2023-06-06T06:17:50.336Z
2023-06-01T00:00:00.000
{ "year": 2023, "sha1": "76dd3e99a5e9d8d15d2a0ce578e5d9088f16b236", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1002/ame2.12330", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "74ad257d600af5b24e71267e6391057e4635e118", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
119258434
pes2o/s2orc
v3-fos-license
The 2nd order coherence of superradiance from a Bose-Einstein condensate

We have measured the 2-particle correlation function of atoms from a Bose-Einstein condensate participating in a superradiance process, which directly reflects the 2nd order coherence of the emitted light. We compare this correlation function with that of atoms undergoing stimulated emission. Whereas the stimulated process produces correlations resembling those of a coherent state, we find that superradiance, even in the presence of strong gain, shows a correlation function close to that of a thermal state, just as for ordinary spontaneous emission.

Ever since the publication of Dicke's 1954 paper [1], the problem of the collective emission of radiation has occupied many researchers in the fields of light scattering, lasers and quantum optics. Collective emission is characterized by a rate of emission which is strongly modified compared to that of the individual atoms [2]. It occurs in many different contexts: hot gases, cold gases, solids and even planetary and astrophysical environments [3]. The case of an enhanced rate of emission, originally dubbed superradiance, is closely connected to stimulated emission and gain, and as such resembles laser emission [4]. Lasers are typically characterized by high phase coherence but also by a stable intensity, corresponding to Poissonian noise, or a flat 2nd order correlation function [5]. Here we present measurements showing that the coherence properties of superradiance, when it occurs in an ultracold gas and despite strong amplified emission, are much closer to those of a thermal state, with super-Poissonian intensity noise.

Research has shown that the details of collective emission depend on many parameters such as the pumping configuration, dephasing and relaxation processes, sample geometry, the presence of a cavity, etc., and, as a result, a complex nomenclature has evolved, including the terms superradiance, superfluorescence, amplified spontaneous emission, mirrorless lasing, and random lasing [2,4,6-9], the distinctions among which we will not attempt to summarize here. The problem has recently seen renewed interest in the field of cold atoms [10-25]. This is partly because cold atoms provide a reproducible, easily characterized ensemble in which Doppler broadening effects are small and relaxation is generally limited to spontaneous emission. Most cold atom experiments differ in an important way from the archetypal situation first envisioned by Dicke: instead of creating an ensemble of excited atoms at a well defined time and then allowing this ensemble to evolve freely, the sample is typically pumped during a period long compared to the relaxation time, and emission lasts essentially only as long as the pumping. The authors of reference [10], however, have argued that there is a close analogy to the Dicke problem, and we will follow them in designating this process as superradiance.
In the literature on superradiance there has been relatively little discussion of the coherence and correlation properties of the light. The theoretical treatments we are aware of show that the coherence of collective emission can be quite complicated, but does not resemble that of a laser [2,13,20,26-28]. These results, however, were obtained for simple models that do not include all parameters relevant to laboratory experiments. Experimentally, a study performed on Rydberg atoms coupled to a millimeter-wave cavity [29] showed a thermal mode occupation, and an experiment in a cold atomic vapor in free space [24] observed a non-flat 2nd order correlation function. In the present work, we show that even if the initial atomic state is a Bose-Einstein condensate (BEC), the 2nd order correlation function looks thermal rather than coherent.

Such behavior, which may seem counter-intuitive, can be understood by describing superradiance as a four wave mixing process between two matter waves and two electromagnetic waves. The initial state consists of a condensate, a coherent optical pump beam, and empty modes for the scattered atoms and the scattered photons. If we make the approximation that the condensate and the pump beam are not depleted and can be treated as classical fields, the matter-radiation interaction Hamiltonian is given by

H = ħ Σ_i χ_i ( â†_at,i â†_ph,i + â_at,i â_ph,i ),   (1)

where â†_at,i (â_at,i) and â†_ph,i (â_ph,i) denote atom and photon creation (annihilation) operators for a specific pair of momenta i fixed by energy and momentum conservation, and χ_i is a coupling constant. Textbooks [30] show that, starting from an input vacuum state, this Hamiltonian leads to a product of two-mode squeezed states. When one traces over one of the two modes, α = {at, i} or {ph, i}, the remaining mode β has a thermal occupation with a normalized 2-particle or 2nd order correlator g^(2)(0) = 2, whereas it is unity for a laser. The problem has also been treated for four wave mixing of matter waves [31]. We emphasize that, when starting from initially empty modes, the occupation remains thermal regardless of the gain. In the experiment, we start from initially nearly motionless atoms of a BEC and observe their recoil upon photon emission. To the extent that each recoil corresponds to the emission of a single photon, we can obtain essentially the same information about the radiation from such measurements as by observing it directly. In doing this, we are following the approach pioneered in experiments such as [10,29] and followed by many others, which uses highly developed atom detection and imaging techniques to glean most of the experimental information about the process. We are able to make time-integrated measurements of the emission, resolved in transverse and longitudinal momentum as well as in polarization, and reconstruct the 2-particle correlation function of the recoiling atoms, or equivalently the 2nd order correlation function of the scattered light. We will show that in the configuration of our experiment, the 2nd order correlation is close to that of a thermal sample, and very different from the correlation properties of the initial, condensed atomic state.
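As a compact check of the thermal-statistics claim above, the following derivation sketch (ours, using textbook results for the two-mode squeezed vacuum with squeezing parameter r) shows that the reduced single-mode state is thermal with g^(2)(0) = 2 for any gain:

```latex
\begin{align*}
|\psi\rangle &= \frac{1}{\cosh r}\sum_{n=0}^{\infty}(\tanh r)^{n}\,
               |n\rangle_{\mathrm{at}}\,|n\rangle_{\mathrm{ph}}
  &&\text{(two-mode squeezed vacuum)}\\
\rho_{\mathrm{ph}} &= \operatorname{Tr}_{\mathrm{at}}\,|\psi\rangle\langle\psi|
  = \frac{1}{\cosh^{2} r}\sum_{n}(\tanh r)^{2n}\,|n\rangle\langle n|
  &&\text{(thermal state)}\\
\langle n\rangle &= \sinh^{2} r,\qquad
  \langle n(n-1)\rangle = 2\langle n\rangle^{2}\\
g^{(2)}(0) &= \frac{\langle n(n-1)\rangle}{\langle n\rangle^{2}} = 2
  \qquad\text{independent of } r\text{, i.e. of the gain.}
\end{align*}
```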
We use helium in the 2^3S_1, m = 1 state confined in a crossed dipole trap (see Fig. 1a) with frequencies of 1300 Hz in the x and y directions and 130 Hz in the (vertical) z direction. The dipole trap wavelength is 1.5 µm. The atom number is approximately 50 000 and the temperature of the remaining thermal cloud is 140 nK. A 9 G magnetic field along the y axis defines a quantization axis. After producing the condensate, we irradiate it with a laser pulse of 2.4 W/cm² tuned 600 MHz to the red of the 2^3S_1 → 2^3P_0 transition at λ = 1083 nm, which has a natural linewidth of 1.6 MHz. The excitation beam propagates at an angle of 10° relative to the x axis and its polarization is linear, with the same angle relative to the y axis (see Fig. 1a). The pulse length is 5 µs and it is applied with a delay τ after switching off the trap. The expansion of the cloud during this delay is a convenient way to vary both the optical density and the anisotropy of the sample at constant atom number. The absorption dipole matrix element is of the σ− form, and thus one half of the laser intensity is coupled to the atomic transition, corresponding to a Rabi frequency of 56 MHz. The excited atoms decay with equal branching ratios to the 3 ground states. During the pulse, less than 10% of the atoms are pumped into each of these states. Because of the polarization selection rules, the atoms which are pumped into the m = 0 state cannot reabsorb light from the excitation laser. By focusing on these atoms, we study the regime of "Raman superradiance" [15,32], by which we mean that an absorption and emission cycle is accompanied by a change of the internal state of the atom. When the trap is switched off, the atoms fall toward a micro-channel plate detector which detects individual atoms with 3-dimensional imaging capability and a 10 to 20% quantum efficiency [33]. A magnetic field gradient is applied to sweep away all atoms except those scattered into the m = 0 magnetic sublevel. The average time of flight to the detector is 310 ms and is long enough that the atoms' positions at the detector reflect the atomic momenta after interaction with the excitation laser. Conservation of momentum then requires that these atoms lie on a sphere with a radius equal to the recoil momentum k_rec = 2π/λ. Any additional scattering of light, whether from imperfect polarization of the excitation laser or from multiple scattering by the atoms, will result in atoms lying outside the sphere. We see no significant signal from such events, but in order to completely eliminate the possibility of multiple scattering we restrict our analysis of the data to the spherical shell with inner radius 0.8 k_rec and outer radius 1.2 k_rec.

We excite atoms in an elongated BEC in such a way that an allowed emission dipole can radiate along the long axis. In an anisotropic source, collective emission builds up more efficiently in the directions of highest optical thickness. Superradiance is therefore expected to occur along the long axis of the BEC, in so-called "endfire" modes [10,34]. An important parameter is then the Fresnel number of the sample [2], F = 2R⊥²/(λ R_z), where R⊥ and R_z are the horizontal and vertical Thomas-Fermi radii of the condensate. The Fresnel number distinguishes between the diffraction-limited (F < 1) and multimode superradiance regimes (F > 1). In our case, R⊥ ≈ 5 µm and R_z ≈ 50 µm, yielding a Fresnel number of about unity.
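The quoted value can be checked directly from the numbers given above; the one-line computation below is ours:

```python
# Fresnel number F = 2*R_perp**2 / (lambda * R_z) with the quoted parameters.
wavelength = 1.083e-6        # m, 2^3S_1 -> 2^3P_0 transition
r_perp, r_z = 5e-6, 50e-6    # m, Thomas-Fermi radii
print(2 * r_perp**2 / (wavelength * r_z))  # -> 0.92, i.e. about unity
```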
Typical cuts through the atomic momentum distribution in the yz plane are shown in Fig. 2, for τ = 500 µs (left panel) and τ ≈ 0 (right panel). In both cases, the spherical shell with radius 1 k_rec appears clearly. For the short delay, when the atomic sample remains dense and anisotropic, we observe strong scattering into the endfire modes at the top and bottom poles of the sphere. In addition to this change in the profile of the distribution, we measure an increase of the total number of atoms on the sphere by a factor ∼5 from τ = 500 µs to τ ≈ 0. Because each atom has scattered a single photon, this increase directly reflects an increase of the rate of emission in the sample and therefore demonstrates the collective nature of the scattering process. At long delays, the condensate has expanded sufficiently that the optical thickness and anisotropy have fallen dramatically, suppressing the collective scattering. By looking at the number of scattered atoms in the x direction (perpendicular to the plane of Fig. 2), we have verified that, away from the endfire modes, the rate of emission varies by less than 10% for different delays [35].

To see the distribution in a more quantitative way, we show in Fig. 3 an angular plot of the atom distribution in the yz plane. Data are shown for three different delays τ before application of the excitation pulse. For a 500 µs delay, the angular distribution follows the well-known "sin²θ" linear dipole emission pattern, with the angles θ = 0 and π corresponding to the orientation of the dipole along the y axis [35]. For a 200 µs delay, the superradiant peaks are already visible on top of the dipole emission profile. For the shortest delay, the half width of the superradiant peaks is 0.14 k_rec, or 0.14 rad, consistent with the diffraction angle and the aspect ratio of the source. In the vertical direction, the superradiant peaks are 10 times narrower than in the horizontal direction [35].

In the strongly superradiant case, we observe large and uncorrelated fluctuations of the heights of the two superradiant peaks on a shot-to-shot basis. These fluctuations directly reflect the fluctuations of the population of the superradiant modes. We investigate these fluctuations further by measuring the normalized 2-particle correlation function of the scattered atoms, defined as

g^(2)(Δk) = ⟨: n(k) n(k + Δk) :⟩ / ( ⟨n(k)⟩ ⟨n(k + Δk)⟩ ).   (2)

Here, n is the atomic density and : : denotes normal ordering. In practice, this function is obtained from a histogram of pair separations Δk normalized to the autoconvolution of the average particle momentum distribution [36,37]. Figure 4 shows the experimentally measured correlation functions, integrated over the momentum along two out of three axes, both for the superradiant peaks and on the scattering sphere away from the peaks [35]. We see that in both cases the correlation function at zero separation reaches a value close to 2. This shows clearly that, despite strong amplified emission in the endfire modes, the atoms undergoing a superradiant process have statistics comparable to those of a thermal sample. As underlined in the introduction, these large fluctuations can be simply understood by modeling the superradiant emission as a four wave mixing process; they arise from the fact that the emission is triggered by spontaneous emission. For the superradiant peaks, the correlation actually is slightly larger than 2. Similar behavior has appeared in some models [20,38], but these models may not be directly applicable to our situation.
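A minimal numerical sketch (ours) of the pair-histogram estimator described above: same-shot pair separations are histogrammed and normalized by pairs built across different shots, which approximate the autoconvolution of the average momentum distribution. The bin choice and the simple nearest-neighbor shot mixing are illustrative assumptions.

```python
import numpy as np

def g2_1d(kz_shots, bins):
    """Estimate g2(dk) along one axis from a list of per-shot 1D momentum arrays."""
    same = np.zeros(len(bins) - 1)
    mixed = np.zeros(len(bins) - 1)
    for i, a in enumerate(kz_shots):
        # all pair separations within the same experimental shot
        d_same = np.abs(a[:, None] - a[None, :])[np.triu_indices(len(a), k=1)]
        same += np.histogram(d_same, bins=bins)[0]
        # pairs built across two different shots ("event mixing")
        b = kz_shots[(i + 1) % len(kz_shots)]
        mixed += np.histogram(np.abs(a[:, None] - b[None, :]).ravel(), bins=bins)[0]
    # normalize the two histograms to unit area so that g2 -> 1 at large separation
    return (same / same.sum()) / np.maximum(mixed / mixed.sum(), 1e-12)
```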
Figure 4 also shows that the correlation widths of the superradiant modes are somewhat broader than those of the atoms scattered into other modes. The effect is a factor of about 1.5 in the vertical direction and about 1.25 in the horizontal direction [35]. The broadening indicates that the effective source size for superradiance is slightly smaller than for spontaneous scattering. A decreased vertical source size for superradiance is consistent with the observations of Refs. [39,40], which showed that the superradiant emission is concentrated near the ends of the sample. In the horizontal direction, one also expects a slightly reduced source size relative to the atom cloud since the gain is higher in the center, where the density is higher. The fact that the correlation widths are close to the widths of the momentum distribution [35] indicates that the superradiant peaks are almost single mode, as expected for samples with a Fresnel number close to unity [2].

FIG. 4: (color online) Correlation functions along the z (a) and y (b) axes for τ ≈ 0. Blue circles correspond to the superradiant peaks (defined by |k_z| > 0.95 k_rec). Orange circles correspond to atoms from the scattering sphere away from the superradiant peaks (defined by |k_z| < 0.92 k_rec). Solid lines are Gaussian fits constrained to approach unity at large separation. Gray solid circles correspond to a fraction of the initial condensate transferred to the m = 0 state via a stimulated Raman transfer. The dashed gray line shows unity. Error bars denote the 68% confidence interval.

The spontaneous superradiant scattering process should be contrasted with stimulated Raman scattering. In terms of the model described by the Hamiltonian (1), stimulated Raman scattering corresponds to seeding one of the photon modes with a coherent state. In this case, vacuum fluctuations do not initiate the scattering process, and the resulting mode occupation is not thermal but coherent. To study stimulated scattering, we applied the excitation beam together with another beam polarized parallel to the magnetic field and detuned by the Zeeman shift (25 MHz) with respect to the σ-polarized beam, inducing a stimulated Raman transition. The laser intensities were adjusted to transfer a similar number of atoms to the m = 0 state as in the superradiance experiment. The normalized correlation functions in this situation, shown in Fig. 4, are very nearly flat and equal to unity, as we expect for a BEC [36,41,42]. The complementary experiment, seeding the atomic mode with a coherent state, has also been observed to produce a coherent amplified matter wave [43,44]. As a side remark, we have also observed that the superradiant atom peaks are 2.8 times narrower in the vertical direction than the vertical width of the transferred condensate [35]. We attribute this to a longitudinal gain narrowing effect [27].
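The Gaussian fits of Fig. 4, constrained to approach unity at large separation, can be reproduced with a few lines of SciPy; the data array below is synthetic and only illustrates the fit model:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_one(dk, amp, sigma):
    """Gaussian constrained to approach unity at large separation,
    the model used for the solid lines in Fig. 4."""
    return 1.0 + amp * np.exp(-dk**2 / (2 * sigma**2))

# dk, g2_data: separation bins and measured correlation (synthetic here)
dk = np.linspace(0.0, 0.3, 30)
g2_data = gauss_plus_one(dk, 1.1, 0.05) + 0.05 * np.random.randn(dk.size)

(amp, sigma), _ = curve_fit(gauss_plus_one, dk, g2_data, p0=[1.0, 0.05])
hwhm = sigma * np.sqrt(2 * np.log(2))
print(f"g2(0) ~ {1 + amp:.2f}, correlation half-width ~ {hwhm:.3f} k_rec")
```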
We also investigated the influence of several other experimental parameters on the second-order coherence of the superradiant emission. We excited the atomic sample with a longer and stronger pulse (10 µs, 3.2 W/cm²), so that the initial condensate was entirely depleted. We explored the Rayleigh scattering regime, in which the atoms scatter back to their initial internal state. We also changed the longitudinal confinement frequency of the BEC to 7 Hz, leading to a much greater aspect ratio. These different configurations led to 2-particle correlation functions very similar to the one discussed above. We believe that similar fluctuations will occur in superradiance from a thermal cloud provided that the gain in the medium is large enough. We were unable to confirm this experimentally in our system, precisely because of the vastly reduced optical density. However, noncoherent intensity fluctuations have already been observed using magneto-optically trapped atoms [24]. This seems to confirm our interpretation that the large fluctuation of the superradiant mode occupation is an intrinsic property of superradiant emission, reflecting the seeding by spontaneous emission. The only way to suppress these fluctuations would be to restrict the number of scattering modes to one by means of a cavity and to saturate the gain by completely depleting the atomic cloud. The occupation of the superradiant mode would then simply reflect that of the initial atomic sample.

An interesting extension of the techniques used here is to examine superradiant Rayleigh scattering of a light pulse short enough and strong enough to populate oppositely directed modes [45]. It has been predicted [13,14,46] that the modes propagating in opposite directions are entangled, similar to those produced in atomic four-wave mixing [47-49]. A similar measurement technique should be able to reveal them.

I. SUPPLEMENTARY MATERIAL

Distribution of atoms in the xz plane

The distribution of scattered atoms in the yz plane showed a vanishing population along the direction of the emission dipole (angles 0 and π in Fig. 3 of the main text). In the xz plane, on the other hand, the angular distribution is, as expected, uniform between the superradiant peaks; see Fig. S1. The signal is zero on one side of each superradiant peak because the atomic cloud in the xz plane is off center with respect to the detector, due to the recoil from the excitation laser, and the part of the distribution corresponding to k_x > 0.4 k_rec misses the detector, as shown in Fig. S2.

Calculation of the correlation functions

The quantity actually displayed in Fig. 4 of the main text is not the correlation function as defined in Eq. 1, but the one defined by Eq. S1, in which the volume Ω₁ is defined by the corresponding boundary conditions. Integration in momentum space is performed over a specific volume Ω_V for each of the three cases shown in Fig. 4:

• superradiant peaks: |k_z| > 0.95 k_rec and no constraint in the xy plane;

• scattering sphere away from the superradiant peaks: |k_z| < 0.92 k_rec and no constraint in the xy plane;

• stimulated Raman transfer: Ω_V is the volume centered on the cloud with a width along z of 0.1 k_rec and no constraint in the xy plane.

Widths of the superradiant peaks

In order to obtain the widths of the superradiant peaks, we first derive the contribution of ordinary spontaneous emission.
Fig. S3 displays a close-up of the superradiant peak around −π/2 in the yz plane. The data corresponding to a long delay before application of the excitation pulse (green circles, τ = 500 µs) are well described by a pure spontaneous emission profile sin²(θ), where θ is the polar angle in the yz plane (green curve). Since the contribution of the spontaneous emission should be the same for all delays, we subtract this background from the atomic signal before fitting the distribution with a Lorentzian function. The sum of the background and the fit is also displayed in Fig. S3 (blue and red curves). The choice of a Lorentzian fitting function is empirical, and we expect the exact shape of the superradiant contribution to be more complex [38]. From this fit we obtain half-widths at half-maximum of 0.14 and 0.25 rad for τ = 0 and 200 µs, respectively.

Table I summarizes the various widths measured in this experiment. The first three lines refer to the widths of the observed atomic distribution in momentum space. The "BEC" entry corresponds to the configuration in which the m = 0 sublevel of the 2^3S_1 state was populated by stimulated Raman transfer (see main text).

FIG. 1: (color online) (a) Sketch of the experiment. A 9 G magnetic field B applied along the y axis defines the quantization axis. The excitation beam propagates at an angle of 10° (not shown) relative to the x axis and its polarization is linear, with the same angle relative to the y axis. After emission, the atoms fall 46 cm to a position-sensitive micro-channel plate (MCP). The atom cloud forms a sphere with enhanced occupation of the endfire modes. (b) Atomic level scheme. The atoms, initially in the 2^3S_1, m = +1 state, are excited to the 2^3P_0 state. From there, they can decay with equal branching ratios to the 3 sublevels of the ground state. We detect only the atoms which scatter into the m = 0 state.

FIG. 2: (color online) Momentum distribution of scattered atoms in the yz plane (containing the emission dipole). Both figures show the distribution in the yz plane, integrated between k_x = ±0.1 k_rec and summed over 100 shots. See the supplemental information for a cut in the xz plane [35]. Left: excitation laser applied 500 µs after the trap switch-off. Only the radiation pattern for a y-polarized dipole is visible. Right: excitation laser applied immediately after the trap switch-off. Strong superradiance is visible in the vertical, endfire modes.

FIG. 3: (color online) Angular distribution of scattered atoms in the yz plane (containing the emission dipole) for different values of the delay τ before the excitation pulse. The data for τ = 0 and 500 µs are the same as those shown in Fig. 2. The images were integrated along the x axis between ±0.1 k_rec and only atoms lying inside a shell with inner radius 0.8 k_rec and outer radius 1.2 k_rec were taken into account. The delays τ = 0, 200 and 500 µs correspond to peak densities of ≈ 8, 2, 0.4 × 10¹⁸ m⁻³ and to aspect ratios of 10, 5 and 2.5, respectively. The endfire modes are located at ±π/2. The half-width at half-maximum of the highest peak is 0.14 rad. Error bars are shown and denote the 68% confidence interval.

FIG. S1: (colour online) Angular distribution of scattered atoms in the plane perpendicular to the emission dipole for different values of the delay τ before the excitation pulse. The data shown are the same as those discussed in the main text.
FIG. S2: (colour online) Momentum distribution of scattered atoms in the plane perpendicular to the emission dipole. Both figures show the atom distribution in the xz plane, integrated between k_y = ±0.1 k_rec and summed over 100 shots. Left: excitation laser applied 500 µs after the trap has been switched off. Only the radiation pattern for a y-polarized dipole is visible. Right: excitation laser applied immediately after the trap has been switched off. Strong superradiance is visible in the vertical, endfire modes.

FIG. S3: (colour online) Close-up of the momentum distribution of scattered atoms around one superradiant peak in the plane of the emission dipole (yz plane). The data shown are the same as those discussed in the main text. Plain lines are fits to the data (see text for details).

TABLE I: Half-widths at half-maximum for density and correlation function in units of k_rec. The number in parentheses denotes the uncertainty on the last digit.
2014-06-01T19:22:10.000Z
2013-12-24T00:00:00.000
{ "year": 2013, "sha1": "bce8b5fab3e87aa324e3e9eb992db5e6241d14fe", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1312.6772", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "bce8b5fab3e87aa324e3e9eb992db5e6241d14fe", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
46767447
pes2o/s2orc
v3-fos-license
High quality factor photonic crystal filter at k ≈ 0 and its application for refractive index sensing

We report here the refractive index (RI) sensing using the singly degenerate high quality factor (Q) modes in photonic crystal slabs (PCS) with the free-space coupled incident beam close to normal incidence. Q values of 3.2×10⁴ and 1.8×10⁴ were achieved for the fabricated PCS in air and aqueous solution, respectively. A spectral sensitivity (S) of 94.5 nm/RIU and a detection limit (DL) of 3×10⁻⁵ RIU were achieved with our device. Such a high-Q cavity for the singly degenerate mode close to normal incidence is very promising for achieving a lower DL for RI sensing. © 2017 Optical Society of America

OCIS codes: (050.5298) Photonic crystals; (350.4238) Nanophotonics and photonic crystals; (130.2790) Guided waves; (280.1415) Biological sensing and sensors.

References and links
1. I. M. White and X. Fan, “On the performance quantification of resonant refractive index sensors,” Opt. Express 16(2), 1020–1028 (2008).
2. X. Fan, I. M. White, S. I. Shopova, H. Zhu, J. D. Suter, and Y. Sun, “Sensitive optical biosensors for unlabeled targets: A review,” Anal. Chim. Acta 620(1-2), 8–26 (2008).
3. F. Vollmer and L. Yang, “Label-free detection with high-Q microcavities: a review of biosensing mechanisms for integrated devices,” Nanophotonics 1(3-4), 267–291 (2012).
4. M. R. Lee and P. M. Fauchet, “Two-dimensional silicon photonic crystal based biosensing platform for protein detection,” Opt. Express 15(8), 4530–4535 (2007).
5. B. Cunningham, B. Lin, J. Qiu, P. Li, J. Pepper, and B. Hugh, “A plastic colorimetric resonant optical biosensor for multiparallel detection of label-free biochemical interactions,” Sens. Actuators B Chem. 85(3), 219–226 (2002).
6. I. D. Block, L. L. Chan, and B. T. Cunningham, “Photonic crystal optical biosensor incorporating structured low-index porous dielectric,” Sens. Actuators B Chem. 120(1), 187–193 (2006).
7. Y. Guo, J. Y. Ye, C. Divin, B. Huang, T. P. Thomas, J. R. Baker, Jr., and T. B. Norris, “Real-time biomolecular binding detection using a sensitive photonic crystal biosensor,” Anal. Chem. 82(12), 5211–5218 (2010).
8. R. Magnusson, D. Wawro, S. Zimmerman, and Y. Ding, “Resonant photonic biosensors with polarization-based multiparametric discrimination in each channel,” Sensors (Basel) 11(2), 1476–1488 (2011).
9. L. L. Chan, S. L. Gosangari, K. L. Watkin, and B. T. Cunningham, “Label-free imaging of cancer cells using photonic crystal biosensors and application to cytotoxicity screening of a natural compound library,” Sens. Actuators B Chem. 132(2), 418–425 (2008).
10. Y. Sun and X. Fan, “Analysis of ring resonators for chemical vapor sensor development,” Opt. Express 16(14), 10254–10268 (2008).
11. Y. Sun, S. I. Shopova, G. Frye-Mason, and X. Fan, “Rapid chemical-vapor sensing using optofluidic ring resonators,” Opt. Lett. 33(8), 788–790 (2008).
12. W.-C. Lai, S. Chakravarty, X. Wang, C. Lin, and R. T. Chen, “On-chip methane sensing by near-IR absorption signatures in a photonic crystal slot waveguide,” Opt. Lett. 36(6), 984–986 (2011).
13. M. El Beheiry, V. Liu, S. Fan, and O. Levi, “Sensitivity enhancement in photonic crystal slab biosensors,” Opt. Express 18(22), 22702–22714 (2010).
14. C. Nicolaou, W. T. Lau, R. Gad, H. Akhavan, R. Schilling, and O. Levi, “Enhanced detection limit by dark mode perturbation in 2D photonic crystal slab refractive index sensors,” Opt. Express 21(25), 31698–31712 (2013).
15. S. Wang, Y. Liu, D. Zhao, Y. Shuai, H. Yang, W. Zhou, and Y. Sun, “Optofluidic double-layer Fano resonance photonic crystal slab liquid sensors,” in Conference on Lasers and Electro-Optics (CLEO) (Optical Society of America, 2015), paper STu1F.6.
16. S. Wang, Y. Liu, D. Zhao, H. Yang, W. Zhou, and Y. Sun, “Optofluidic Fano resonance photonic crystal refractometric sensors,” Appl. Phys. Lett. 110(9), 091105 (2017).
17. W.-C. Lai, S. Chakravarty, Y. Zou, Y. Guo, and R. T. Chen, “Slow light enhanced sensitivity of resonance modes in photonic crystal biosensors,” Appl. Phys. Lett. 102(4), 041111 (2013).
18. C. Kang, C. T. Phare, Y. A. Vlasov, S. Assefa, and S. M. Weiss, “Photonic crystal slab sensor with enhanced surface area,” Opt. Express 18(26), 27930–27937 (2010).
19. D. Dorfner, T. Zabel, T. Hürlimann, N. Hauke, L. Frandsen, U. Rant, G. Abstreiter, and J. Finley, “Photonic crystal nanostructures for optical biosensing applications,” Biosens. Bioelectron. 24(12), 3688–3692 (2009).
20. M. G. Scullion, A. Di Falco, and T. F. Krauss, “Slotted photonic crystal cavities with integrated microfluidics for biosensing applications,” Biosens. Bioelectron. 27(1), 101–105 (2011).
21. D. Yang, S. Kita, F. Liang, C. Wang, H. Tian, Y. Ji, M. Lončar, and Q. Quan, “High sensitivity and high Q-factor nanoslotted parallel quadrabeam photonic crystal cavity for real-time and label-free sensing,” Appl. Phys. Lett. 105(6), 063118 (2014).
22. S. Fan and J. D. Joannopoulos, “Analysis of guided resonances in photonic crystal slabs,” Phys. Rev. B 65(23), 235112 (2002).
23. S. G. Johnson, S. Fan, P. R. Villeneuve, J. D. Joannopoulos, and L. A. Kolodziejski, “Guided modes in photonic crystal slabs,” Phys. Rev. B 60(8), 5751–5758 (1999).
24. R. Magnusson and S. S. Wang, “New principle for optical filters,” Appl. Phys. Lett. 61(9), 1022–1024 (1992).
25. W. Zhou, D. Zhao, Y. Shuai, H. Yang, S. Chuwongin, A. Chadha, J.-H. Seo, K. X. Wang, V. Liu, Z. Ma, and S. Fan, “Progress in 2D photonic crystal Fano resonance photonics,” Prog. Quantum Electron. 38(1), 1–74 (2014).
26. Y. Shuai, D. Zhao, A. Singh Chadha, J.-H. Seo, H. Yang, S. Fan, Z. Ma, and W. Zhou, “Coupled double-layer Fano resonance photonic crystal filters with lattice-displacement,” Appl. Phys. Lett. 103(24), 241106 (2013).
27. Y. Shuai, D. Zhao, Z. Tian, J.-H. Seo, D. V. Plant, Z. Ma, S. Fan, and W. Zhou, “Double-layer Fano resonance photonic crystal filters,” Opt. Express 21(21), 24582–24589 (2013).
28. W. M. Robertson, G. Arjavalingam, R. D. Meade, K. D. Brommer, A. M. Rappe, and J. D. Joannopoulos, “Measurement of photonic band structure in a two-dimensional periodic dielectric array,” Phys. Rev. Lett. 68(13), 2023–2026 (1992).
29. K. Sakoda, “Symmetry, degeneracy, and uncoupled modes in two-dimensional photonic lattices,” Phys. Rev. B Condens. Matter 52(11), 7982–7986 (1995).
30. O. Kilic, M. Digonnet, G. Kino, and O. Solgaard, “Controlling uncoupled resonances in photonic crystals through breaking the mirror symmetry,” Opt. Express 16(17), 13090–13103 (2008).
31. J. Lee, B. Zhen, S.-L. Chua, W. Qiu, J. D. Joannopoulos, M. Soljačić, and O. Shapira, “Observation and differentiation of unique high-Q optical resonances near zero wave vector in macroscopic photonic crystal slabs,” Phys. Rev. Lett. 109(6), 067401 (2012).
32. V. Liu, M. Povinelli, and S. Fan, “Resonance-enhanced optical forces between coupled photonic crystal slabs,” Opt. Express 17(24), 21897–21909 (2009).
33. V. Liu and S. Fan, “S4: A free electromagnetic solver for layered periodic structures,” Comput. Phys. Commun. 183(10), 2233–2244 (2012).
34. Z. Qiang, H. Yang, L. Chen, H. Pang, Z. Ma, and W. Zhou, “Fano filters based on transferred silicon nanomembranes on plastic substrates,” Appl. Phys. Lett. 93(6), 061106 (2008).
35. G. M. Hale and M. R. Querry, “Optical constants of water in the 200-nm to 200-μm wavelength region,” Appl. Opt. 12(3), 555–563 (1973).
36. V. Lousse, W. Suh, O. Kilic, S. Kim, O. Solgaard, and S. Fan, “Angular and polarization properties of a photonic crystal slab mirror,” Opt. Express 12(8), 1575–1582 (2004).
37. B. Luk’yanchuk, N. I. Zheludev, S. A. Maier, N. J. Halas, P. Nordlander, H. Giessen, and C. T. Chong, “The Fano resonance in plasmonic nanostructures and metamaterials,” Nat. Mater. 9(9), 707–715 (2010).
38. A. F. Oskooi, D. Roundy, M. Ibanescu, P. Bermel, J. D. Joannopoulos, and S. G. Johnson, “Meep: A flexible free-software package for electromagnetic simulations by the FDTD method,” Comput. Phys. Commun. 181(3), 687–702 (2010).
39. K. Sakoda, Optical Properties of Photonic Crystals (Springer, 2005).
40. Z. Yu and S. Fan, “Extraordinarily high spectral sensitivity in refractive index sensors using multiple optical modes,” Opt. Express 19(11), 10029–10040 (2011).
41. K. A. Tetz, L. Pang, and Y. Fainman, “High-resolution surface plasmon resonance sensor based on linewidth-optimized nanohole array transmittance,” Opt. Lett. 31(10), 1528–1530 (2006).
42. Y. Nazirizadeh, U. Lemmer, and M. Gerken, “Experimental quality factor determination of guided-mode resonances in photonic crystal slabs,” Appl. Phys. Lett. 93(26), 261110 (2008).
43. N. Huang, L. J. Martínez, and M. L. Povinelli, “Tuning the transmission lineshape of a photonic crystal slab guided-resonance mode by polarization control,” Opt. Express 21(18), 20675–20682 (2013).
44. J.-N. Liu, M. V. Schulmerich, R. Bhargava, and B. T. Cunningham, “Optimally designed narrowband guided-mode resonance reflectance filters for mid-infrared spectroscopy,” Opt. Express 19(24), 24182–24197 (2011).
45. I. Alvarado-Rodriguez and E. Yablonovitch, “Separation of radiation and absorption losses in two-dimensional photonic crystal single defect cavities,” J. Appl. Phys. 92(11), 6399–6402 (2002).
46. T. Xu, M. S. Wheeler, H. E. Ruda, M. Mojahedi, and J. S. Aitchison, “The influence of material absorption on the quality factor of photonic crystal cavities,” Opt. Express 17(10), 8343–8348 (2009).
Introduction

Optical biosensors have wide applications in biomedical research, healthcare, and environmental monitoring, and are immune to electromagnetic interference. Optical biosensing technologies can be divided into two categories: fluorescence-based detection and label-free detection. Label-free optical sensing can provide real-time rapid analysis and requires minimal sample preparation without interfering with the function of the biomolecules [1-3]. Label-free sensors can detect a refractive index (RI) change, an absorption change, or a Raman signal induced by the presence of the analyte [2]. RI sensors have been demonstrated in the detection of biochemical molecular interactions [4-8], drug screening [9], and gases [10-12]. RI sensors can detect the RI change of the bulk solution [13-16], polymer RI and thickness changes in gas sensing [10,11], or RI changes due to molecule binding near the sensor surface in aqueous solution [4,7].

The sensor detection limit (DL) fully describes the performance of a sensor, and it can be improved by increasing the quality factor (Q) or the spectral sensitivity (S) [1]. Surface plasmon resonance (SPR) sensors suffer from low Q due to strong intrinsic absorption in metal, which increases the ambiguity in determining the spectral resonance location in the presence of spectral noise [1]. Photonic crystal (PC) devices, including localized PC cavities [4,17-19], slotted PC cavities [20,21], and defect-free photonic crystal slabs (PCS) [5,8,14,16], are among the most promising platforms for building ultra-compact and highly sensitive integrated on-chip sensor arrays due to the enhanced light-matter interaction at the sub-micrometer scale. Localized PC cavities can achieve high Q values of ~10⁴ [4,17-19], but have moderate S of ~100 nm/RIU due to small optical field overlap with the analyte. Slotted PC cavities can obtain high S of 450 to 500 nm/RIU due to larger overlap between the optical mode and the analyte, while the Q (typically 10³ to 10⁴) for these designs is highly sensitive to fabrication imperfections [20,21]. For both localized and slotted PC cavities, delicate alignment is needed to couple the light from fiber to in-plane waveguide. Additionally, the detection speed is greatly compromised by the slow mass transport rate for the analytes to diffuse in and out of the confined nanoscale channels or localized cavities. In a previous work [16], we reported a surface-normal free-space coupled optofluidic sensor based on a defect-free 2D PCS with Q of 2,800 and S of 264 nm/RIU, which achieved fast detection and simple alignment for light coupling.
Guided resonances in PCS allow easy coupling to external, vertically incident beams under certain conditions [22-25], resulting in interferences in the reflection or transmission spectra of the PCS. The resonant feature can be very sharp, which can be utilized for optical filters [26,27]. A dark mode, also termed an uncoupled mode, cannot be observed in the reflection or transmission spectra due to symmetry mismatch to the incident plane wave radiation [28-30]. These uncoupled modes have infinite lifetime with infinite Q factor, and they are non-degenerate in a square lattice PCS [30]. Coupling to these perturbed dark modes is associated with symmetry breaking, either by varying the lattice structure [14,30] or by introducing a finite incident angle [22,31]. A quality factor of 10⁴ and S of 800 nm/RIU were achieved by coupling light to slightly perturbed dark modes through alternating nanohole sizes [14]. However, the PCS had to be suspended, which is more challenging to fabricate and more fragile in the thin membrane region than the non-suspended counterpart. The non-degenerate (or singly degenerate) modes at small incident angle (k ≈ 0) have higher Q than the doubly degenerate modes [22,31,32], which can be employed to achieve a lower detection limit in the RI sensor.

In this work, we demonstrated, theoretically and experimentally, RI sensing using the high-Q singly degenerate modes off normal incidence. A DL of 3×10⁻⁵ RIU has been achieved. These singly degenerate modes are uncoupled modes at normal incidence, but can be observed in the optical spectrum and used for sensing when the mirror symmetry is broken by a finite k vector off normal incidence.

Design

The schematic of the device on a silicon-on-insulator (SOI) substrate is shown in Fig. 1(a), in which the top Si layer thickness is 246 nm and the SiO₂ thickness is 3 μm. The Si PCS has a lattice constant a = 970 nm and a hole radius r = 83 nm. An incident plane wave with wave vector k shines on the PCS in the x-z plane at an incident angle θ and gets reflected back. Figure 1(b) shows top-view and cross-sectional scanning electron microscope (SEM) images of one fabricated device.

Reflection spectra of the PCS under normal incidence (along the z-direction) or at a small incident angle are simulated with the Stanford Stratified Structure Solver (S4), a frequency-domain code based on coupled wave analysis and the S-matrix algorithm [33]. The simulation of the angle-dependent dispersion properties is similar to the one reported earlier [34]. We simulate one unit cell of the PCS immersed in water. The RI of water is taken to be 1.319 at the PCS resonance modes [35]. The reflection spectrum of the PCS is polarization independent at normal incidence because of the 90-degree rotational symmetry of the square lattice [36]. The polarization of the incident optical field is defined with respect to the plane of incidence. Transverse electric (TE) polarization has the electric field E_y perpendicular to the plane of incidence (x-z plane), and transverse magnetic (TM) polarization has the magnetic field H_y perpendicular to the plane of incidence. At normal incidence (k = 0, θ = 0°), there exists one doubly degenerate mode at 1530 nm, shown as the black dashed line in Fig. 2(a). At a 0.5° incident angle, a new mode A emerges at 1515 nm under TM-polarized excitation, as shown in Fig. 2(a).
The newly emerged non-degenerate mode A is an uncoupled mode at the Γ point (k = 0), which has infinite lifetime and is completely decoupled from the external world [31]. When k is slightly greater than zero, this unique guided resonance has finite lifetime and can couple light in and out of the slab efficiently. At a 0.5° incident angle, the singly degenerate mode D can be excited with a TE-polarized light source. Figure 2(c) depicts the calculated radiative quality factor (Q_rad) of these four bands, obtained by Fano fitting of the simulated reflection spectrum [37]. Bands B and C originate from the doubly degenerate mode (at the Γ point) and have finite Q_rad at k ≈ 0, while the singly degenerate bands A and D have Q_rad that approaches infinity as k → 0. At the Γ point, the doubly degenerate mode has the same symmetry as the free-space modes, while the singly degenerate modes A and D are decoupled from free-space modes due to symmetry mismatch.

In this work, we compared two computational techniques to evaluate the sensor performance of the PCS, including the resonance location, sensitivity, quality factor, and detection limit. The first technique is S4, where we simulated the spectral sensitivity (Δλ/Δn_analyte) by tracking the guided resonance shift for a unit cell of the PCS with different analyte RIs above the PCS. The second technique is MEEP, a freely available finite-difference time-domain (FDTD) implementation [38]. We simulated a unit cell of the three-dimensional PCS structure with periodic boundary conditions (PBCs) in plane and perfectly matched layers (PMLs) at the top and bottom of the unit cell to absorb outgoing fields. The excitation consists of a broadband planar Gaussian source located above the PCS to solve for the modes of the PCS. The field distribution for each mode is computed by exciting the PCS with a narrowband planar Gaussian source. In MEEP, the resonance frequency and radiative quality factor of each localized mode are calculated with Harminv, a program that decomposes the fields into a sum of sinusoids and determines their frequencies and decay constants.

Figures 3(a)-3(d) show the E_x component in the y-z plane for the four modes excited at a 0.5° incident angle. These modes have odd symmetry in their electric field with respect to the mirror plane perpendicular to the z axis [13,23]. The field extends greatly into the medium above the PCS, which is water in our case. At the Γ point, the square lattice has the symmetry group C_4v with five irreducible representations. When moving away from the Γ point to the X point, the symmetry group changes to C_1h with only two irreducible representations [39]. The symmetry of each mode can be determined from the mode profile of its E_z component in the x-y plane. As shown in Figs. 3(f) and 3(h), modes B and D are antisymmetric around the x axis, so they can be excited by a TE-polarized source. Figures 3(e) and 3(g) show that modes A and C are symmetric around the x axis and hence can be excited by a TM-polarized source.
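The Harminv-based extraction of resonance frequencies and radiative Q described above can be sketched with MEEP's Python interface. The geometry values follow the text (a = 970 nm, r = 83 nm, t = 246 nm), while the source bandwidth, run time, resolution, and the small in-plane k_point used to break the normal-incidence symmetry are our illustrative choices, not the paper's:

```python
import meep as mp

a = 1.0                                    # lattice constant as the length unit
r, t = 0.083 / 0.97, 0.246 / 0.97          # hole radius and slab thickness in units of a
n_si, n_water = 3.48, 1.319

cell = mp.Vector3(a, a, 6)
geometry = [
    mp.Block(size=mp.Vector3(mp.inf, mp.inf, t), material=mp.Medium(index=n_si)),
    mp.Cylinder(radius=r, height=t, material=mp.Medium(index=n_water)),
]
fcen, df = 0.63, 0.1                       # a/lambda ~ 970/1530
src = [mp.Source(mp.GaussianSource(fcen, fwidth=df), component=mp.Ex,
                 center=mp.Vector3(0, 0, 1.5), size=mp.Vector3(a, a, 0))]

sim = mp.Simulation(cell_size=cell, geometry=geometry, sources=src,
                    boundary_layers=[mp.PML(1.0, direction=mp.Z)],
                    k_point=mp.Vector3(0.005, 0, 0),   # small k to reach the dark modes
                    default_material=mp.Medium(index=n_water),
                    resolution=40)

# Harminv decomposes the decaying fields into sinusoids, returning freq and Q
h = mp.Harminv(mp.Ex, mp.Vector3(0, 0, 0.2), fcen, df)
sim.run(mp.after_sources(h), until_after_sources=500)
for m in h.modes:
    print(f"a/lambda = {m.freq:.4f}, Q_rad = {m.Q:.0f}")
```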
The sensing mechanism in our PCS sensors is the spectral shift of guided resonances due to changes in the RI of the analyte near the PCS top surface. The relation between a small change in the RI of the surrounding medium and the shift in guided resonance frequency is linear to a first-order approximation based on perturbation theory [40]. We can define the optical overlap integral f as the ratio of the electric field energy in the analyte region (liquid in our case) to the total energy for a given mode [13,14],

f = ∫_analyte ε|E|² dV / ∫ ε|E|² dV,   (1)

where ε is the dielectric constant of the material. The bulk spectral sensitivity S is related to the optical overlap integral f and the resonance wavelength λ₀ by

S = Δλ/Δn_analyte ≈ f λ₀ / n_analyte.   (2)

Therefore, it is clear that a large f value will result in a higher bulk sensitivity because of larger field overlap with the analyte. The detection limit describes the smallest RI change that can be measured with an RI sensor [1,13,14], and is defined as

DL = r / S,   (3)

where r is the sensor resolution; r can be taken as three standard deviations (3σ) of the system noise. σ is related to the signal-to-noise ratio (SNR) and Q by [1]

σ ≈ λ₀ / (4.5 Q · SNR^0.25).   (4)

We also examined the field energy distribution ε|E|² for the B mode at a 0.5° incident angle. Figure 4(a) shows the ε|E|² distribution in the x-y plane at the center of the PCS (z = −120 nm), with most of the energy confined in the high-index silicon region. The ε|E|² distribution in the y-z plane at the center of the air hole is plotted in Fig. 4(b). To evaluate how much field energy is concentrated in the liquid region, we plot ε|E|² along the z axis for the slice at x = 0, shown as the blue solid line in Fig. 4(c). In MEEP, we simulated one unit cell with a spatial resolution of 10 nm, which is sufficient for our structure. We integrated ε|E|² along the z axis for all slices at every 10 nm from x = −485 nm to x = 485 nm. The integrated ε|E|² is plotted as the red solid line in Fig. 4(c). The dashed lines indicate the boundary of the PCS, with the PCS-liquid boundary located at z = 0 nm. The optical overlap integral f in the liquid region above the PCS is calculated to be 10.6% by integrating ε|E|² for z > 0, according to Eq. (1). The f in the air holes of the PCS is around 0.3%, which does not contribute much to the bulk sensitivity. The other three modes A, C and D were calculated in the same way. A summary of the mode properties and sensor performance for the four modes at a 0.5° incident angle is shown in Table 1. The integrated ε|E|² and the sensitivity are almost the same for the four modes. For the calculation of the sensitivity in S4, we simulated the reflection spectrum for n_analyte = 1.319 and n_analyte = 1.329. An SNR of 60 dB is taken to calculate the DL. The singly degenerate modes A and D have one order of magnitude higher Q with similar S compared with modes B and C, thereby achieving a lower DL.
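Equations (1)-(4) combine into a short figure-of-merit calculation. The sketch below uses illustrative numbers close to those quoted for mode D, and the sensitivity expression assumes the first-order perturbation form of Eq. (2):

```python
import numpy as np

# Sensor figures of merit; numbers are illustrative, not fitted values.
lam0 = 1545.0          # resonance wavelength (nm)
n_analyte = 1.319
f = 0.106              # optical overlap integral in the liquid
Q = 1.8e4              # measured Q in water
snr_db = 60.0

S = f * lam0 / n_analyte                   # bulk sensitivity (nm/RIU), Eq. (2)
snr = 10 ** (snr_db / 10)
sigma = lam0 / (4.5 * Q * snr**0.25)       # spectral noise (nm), Eq. (4)
DL = 3 * sigma / S                         # detection limit (RIU), Eq. (3)
print(f"S ~ {S:.0f} nm/RIU, sigma ~ {sigma * 1e3:.1f} pm, DL ~ {DL:.1e} RIU")
```

With these inputs the estimate lands at the 10⁻⁵ RIU level, consistent with the measured DL of 3×10⁻⁵ RIU.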
Fabrication and characterization

The PC device was fabricated on an SOI substrate with electron beam lithography (EBL) and a plasma dry-etching process, with top-view and cross-sectional SEM images shown in Fig. 1(b). A polydimethylsiloxane (PDMS) microfluidic channel fabricated by soft lithography was bonded onto the device after a short oxygen plasma treatment to form the fluidic chamber for aqueous analyte delivery. The schematic of the PCS-based optofluidic RI sensor is shown in Fig. 5(a). The PDMS thickness is around 2 mm, with negligible optical absorption in the PCS resonance wavelength region. The incident light beam passes through the PDMS chamber and shines on the PCS sensor. The test setup is shown in Fig. 5(b). A tunable laser light source (1465-1572 nm, Agilent 81980A) is used, and the beam is collimated, polarized, and focused onto the 500 × 500 μm² PCS device. The reflected light is cross-polarized at 90° with respect to the incident beam and is collected by a detector. The PCS device is mounted on a translation (x-y-z) stage and a rotation stage for aligning the laser beam to the PCS under normal incidence. The cross-polarization measurement technique suppresses the incident background light, revealing the guided resonance in the reflection spectrum with a high extinction ratio (ER) [41,42].

The black curve in Fig. 6(a) shows the measured reflection spectrum of the PCS in air under normal incidence without the use of the cross-polarization technique (i.e., polarizer P2 was removed during the test). The simulated reflection spectrum of the PCS in air is shown as the blue dashed line, which matches well with the measured spectrum. When the PCS is immersed in water, the simulated reflection spectrum, shown as the red dashed line, indicates around a 30 nm red shift compared to the spectrum in air. The Fano resonance has an asymmetric lineshape in the reflection spectrum [25]. The quality factor of the measured reflection spectrum is fitted to be 2,690, as shown in Fig. 6(b). The measured reflection spectra of the PCS with the cross-polarization technique are shown in Fig. 7(a). The resonances have a Lorentzian lineshape because the Fabry-Perot background light is suppressed while the guided resonance is revealed [43]. In the test, we used a 10× objective lens to focus the collimated beam with a 2 mm waist to 150 μm at the focal plane, and the half-angle of divergence of the beam is estimated to be 0.35°. The sample was mounted at the focal plane of the lens, so the incoming incident beam includes wave vectors with k > 0. Care was taken to align the beam to normal incidence. The two modes B and C are close to each other in the spectrum when k ≈ 0, and there is no clear distinction between them. The measured quality factors (Q_M) for the spectra in air and in water were obtained by Lorentzian fitting [44]. Q_M is 3.2×10⁴ for the resonance of the PCS tested in air, and the Lorentzian fit is shown in Fig. 7(b). When the PCS is immersed in water, the resonance spectral linewidth becomes broadened, exhibiting Q_M = 1.8×10⁴, as shown in Fig. 7(c). The extinction ratio is more than 10 dB for these resonances. The measured Q_M can be related to Q_rad by the following equation [45]:

1/Q_M = 1/Q_rad + 1/Q_loss,   (5)

where Q_loss depends on material absorption and scattering loss due to fabrication imperfections. The reduction of Q_M from 3.2×10⁴ to 1.8×10⁴ is due to the absorption of water in this wavelength region [46]. The absorption-induced quality factor Q_abs is given by [46]

Q_abs = n_r / (2 f n_i),   (6)

where n_i and n_r are the imaginary and real parts of the refractive index of the absorptive material, and f is the optical overlap integral in the absorptive material. We take n_i as 10⁻⁴ and n_r as 1.319 [35]. The optical overlap integral f is 10.8% for our structure, as shown in Table 1. Q_abs is calculated to be around 6.1×10⁴, which is the upper bound for the quality factor of the PCS sensor in water.
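The Lorentzian fitting used to extract Q_M can be sketched as follows; the spectrum here is synthetic, standing in for the measured cross-polarized reflection data:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(lam, lam0, fwhm, amp, bg):
    """Lorentzian resonance on a flat background, as used to fit
    the cross-polarized reflection spectra."""
    return bg + amp / (1 + ((lam - lam0) / (fwhm / 2)) ** 2)

# lam, refl: wavelength grid (nm) and reflection signal (synthetic here)
lam = np.linspace(1540, 1550, 500)
refl = lorentzian(lam, 1545.2, 1545.2 / 3.2e4, 1.0, 0.02)

popt, _ = curve_fit(lorentzian, lam, refl, p0=[1545, 0.1, 1, 0])
lam0, fwhm = popt[0], popt[1]
print(f"Q_M = {lam0 / fwhm:.0f}")   # ~3.2e4 for the synthetic input
```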
We further characterized the sensing performance of the D mode with various concentrations of ethanol/deionized (DI) water mixtures (0.05%-0.5% volume ratio). The respective spectra are shown in Fig. 8(a). We performed a water rinsing step before injecting each new ethanol/DI water mixture to ensure that the PDMS chamber was free of residual ethanol from the previous test run. The resonance peak position at each concentration was found by Lorentzian fitting, and the spectral shift is plotted in Fig. 8(b), with a lowest detectable RI change of 3×10⁻⁵ RIU achieved using 0.05% ethanol/DI water. The bulk sensitivity for mode D is fitted to be 94.5 nm/RIU, which matches well with the simulated sensitivity of 100 nm/RIU for this mode.

Conclusion

We have presented the theoretical background and a practical technique to couple light to the singly degenerate modes at k ≈ 0, which are unobservable under normal incidence due to symmetry mismatch. We discussed the difference between singly degenerate and doubly degenerate modes. The high Q values of the singly degenerate modes can substantially lower the detection limit of 2D PCS-based RI sensors. A high Q value of 10⁶ could be achieved theoretically, and a high Q of 3.2×10⁴ has been achieved in the experiment, limited by the angular spread of the incident beam and fabrication imperfections of the PCS. A DL of 3×10⁻⁵ RIU was achieved with 94.5 nm/RIU spectral sensitivity.

Fig. 1. (a) Schematic of the Fano (guided) resonance PCS on an SOI substrate. (b) Device SEM top view with a zoom-in of four holes; an angled cross-sectional view of three air holes is shown in the inset.

Figure 2(b) shows the dispersion curves of the four energy bands along the Γ-X direction defined in Fig. 1(a) under small incident angles (θ). The doubly degenerate mode at the Γ point splits into two modes B and C when the incident angle is not zero. The two singly degenerate modes A and D do not continue to k = 0 in Fig. 2(b) because they are uncoupled modes at k = 0.

Fig. 2. (a) Simulated reflection spectra for the PCS at 0.5° and 0° incident angles. (b) Simulated band diagram of the singly degenerate modes (A, D) and doubly degenerate modes (B, C) of the PCS. (c) Fano-fitted quality factors of the reflection spectra at different incident angles for the four modes.

Fig. 3. (a-d) E_x field distribution in the y-z plane at the center of the hole, and (e-h) E_z field distribution in the x-y plane at the center of the PCS for the four modes: (a, e) mode A, (b, f) mode B, (c, g) mode C, (d, h) mode D, with the air hole boundary shown as a dashed line and the Si region boundary shown as a solid line.
Fig. 4. Simulation of the B mode at a 0.5° incident angle. (a) ε|E|² profile in the x-y plane at the center of the PCS (z = −120 nm) with the air hole boundary shown as a dashed line. (b) ε|E|² profile in the y-z plane at the center of the air hole (x = 0); solid lines show the boundary of the Si region. (c) Distribution of ε|E|² along the vertical (z-axis) direction for x = 0 and integrated ε|E|² over −485 nm < x < 485 nm, with dashed lines showing the Si slab boundary.

Fig. 6. (a) Simulated reflection spectra of the PCS at normal incidence in air and in water, and measured reflection spectrum of the PCS in air without the polarizer. (b) Measured reflection spectrum in air; Fano fitting gives Q = 2,690.

Fig. 7. (a) Measured PCS reflection spectra with cross polarizers in air and in water. (b, c) Lorentzian fits of the measured reflection resonance for the D mode (b) in air and (c) in water.

Fig. 8. (a) Measured reflection spectra of the PCS in water and in different concentrations of ethanol/DI water mixtures, with the spectra in water and in 0.05% ethanol shown in the inset. (b) The bulk sensitivity of the PCS is linearly fitted to be 94.5 nm/RIU.
2018-04-03T01:07:17.656Z
2017-05-01T00:00:00.000
{ "year": 2017, "sha1": "6f98fd57d9be262230ebd8e3605b66f5adf8da5e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.25.010536", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "6f98fd57d9be262230ebd8e3605b66f5adf8da5e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
16643037
pes2o/s2orc
v3-fos-license
Quantitative analysis of electric dipole energy in Rashba band splitting

We report a quantitative comparison between the electric dipole energy and the Rashba band splitting in model systems of Bi and Sb triangular monolayers under a perpendicular electric field. We used both first-principles and tight-binding calculations on p-orbitals with spin-orbit coupling. The first-principles calculation shows Rashba band splitting in both systems. It also shows asymmetric charge distributions in the Rashba-split bands, which are induced by the orbital angular momentum. We calculated the electric dipole energies from the coupling of the asymmetric charge distribution to the external electric field, and compared them to the Rashba splitting. Remarkably, the total split energy is found to come mostly from the difference in the electric dipole energy for both the Bi and Sb systems. A perturbative approach in the long-wavelength limit, starting from the tight-binding calculation, also supports the conclusion that the Rashba band splitting originates mostly from the electric dipole energy difference in the strong atomic spin-orbit coupling regime.

Rashba band splitting in a system with inversion symmetry breaking (ISB), such as at material surfaces and hetero-structures, has recently drawn much attention in the condensed matter physics community [1-5]. In addition to its fundamental importance, it is believed to play a vital role in the spin-orbit torque of spintronic materials [3]. It is thus important to thoroughly understand the origin of the Rashba band splitting. Tight-binding and first-principles calculations reproduce very well the experimentally observed Rashba band splittings at material surfaces [6-8]. In particular, tight-binding calculations show that atomic spin-orbit coupling (SOC) is a crucial parameter in the Rashba band splitting [6]. However, it had been unclear which interaction among many is crucial for the Rashba band splitting. It was recently found that atomic orbital angular momentum (OAM) exists in the presence of ISB and that it induces an asymmetric charge distribution for non-zero crystal momentum [9]. The dipole energy from the interaction between the asymmetric charge distribution and the electric field from the ISB has been proposed to be responsible for the Rashba band splitting in the strong-SOC case. The proposed effective Hamiltonian for the Rashba band splitting is H = α L·S − α_K (k × L)·E_s, where α, L, S, α_K, k, and E_s are the SOC parameter, OAM, spin angular momentum (SAM), coupling parameter for the electric dipole interaction, crystal momentum, and electric field induced by the ISB, respectively [10,11]. The first term in the Hamiltonian is the usual atomic SOC, while the second term represents the newly proposed electric dipole energy. It has previously been conjectured that the atomic SOC determines the Rashba band splitting in the weak atomic SOC regime, while the new dipole energy term does so in the strong atomic SOC regime [11,12]. In both cases, the existence of the OAM is essential. The existence of the OAM in Rashba bands was shown by tight-binding and first-principles calculations [9,13]. It was also supported experimentally by the observation of strong circular dichroism in angle-resolved photoemission (ARPES) [10,11,14,15]. While the existence of the OAM is certain, its role in the Rashba band splitting does not appear to be fully accepted.
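The effective Hamiltonian H = α L·S − α_K (k × L)·E_s can be diagonalized directly for p-orbitals (l = 1) with spin 1/2. A minimal numerical sketch is shown below; the parameter values α, α_K, and E_s are illustrative choices, not values from the paper, and E_s is taken along z so that (k × L)·E_s reduces to E_s (k_x L_y − k_y L_x):

```python
import numpy as np

# l = 1 angular momentum matrices in the |m = 1, 0, -1> basis (hbar = 1)
s2 = 1 / np.sqrt(2)
Lx = np.array([[0, s2, 0], [s2, 0, s2], [0, s2, 0]])
Ly = 1j * np.array([[0, -s2, 0], [s2, 0, -s2], [0, s2, 0]])
Lz = np.diag([1.0, 0.0, -1.0])
Sx, Sy, Sz = [0.5 * np.array(m) for m in
              ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

def H_eff(kx, ky, alpha=1.0, alpha_k=0.3, Es=1.0):
    # L.S in the 6-dimensional product space (orbital ⊗ spin)
    LS = sum(np.kron(L, S) for L, S in ((Lx, Sx), (Ly, Sy), (Lz, Sz)))
    # (k x L)_z = kx*Ly - ky*Lx, acting as identity on spin
    kxL_z = kx * Ly - ky * Lx
    return alpha * LS - alpha_k * Es * np.kron(kxL_z, np.eye(2))

for k in (0.0, 0.1, 0.2):
    E = np.linalg.eigvalsh(H_eff(k, 0.0))
    print(f"k = {k:.1f}: {np.round(E, 3)}")
```

At k = 0 the spectrum shows the usual J = 3/2 and J = 1/2 SOC multiplets; at finite k each Kramers pair splits linearly, illustrating the Rashba-type splitting generated by the dipole term.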
Part of the reason could be the fact that a quantitative estimation of the OAM contribution based on, for example, first-principles calculations has not been performed. Therefore, a quantitative comparison between the electric dipole energy difference and the Rashba band splitting of neighboring bands is desired, especially in the strong atomic SOC case. Significance of the electric dipole energy difference compared to the total Rashba band splitting energy should prove the validity of the OAM-based effective Rashba model. To show the importance of the electric dipole energy in a quantitative way, we performed first-principles and tight-binding calculations on Bi and Sb monolayers with an external field. The use of monolayers with an external field mimics the surface state without dealing with complicated bulk states. This model is simple enough for the purpose of our research, which is to explore the origin of the Rashba band splitting of the surface states. The first-principles and tight-binding calculations are complementary to each other. The electric dipole energy is estimated by using the wave functions from the first-principles calculation, which is not model-dependent, while the tight-binding calculations allow a more intuitive analysis of the band structures and wave functions. The outcome of the interaction between the asymmetric charge distribution and the electric field is responsible for the Rashba band splitting in Bi and Sb monolayers.

Results and Discussions

First-principles calculations. We present in Fig. 1 the band structures of Bi and Sb single layers under an electric field of 0.5 V/Å along the direction perpendicular to the layers. The six bands are composed of p-orbitals of the Bi or Sb atoms. Bands 1 and 2 are mainly of J ≈ 1/2 character, while bands 3 to 6 come mainly from J ≈ 3/2 states. We can see the Rashba splitting in the band structure near the Γ and M points, which are time-reversal invariant momenta (TRIM) of the triangular lattice. The Bi single layer, whose atomic SOC is stronger than that of Sb, shows a larger Rashba splitting in its band structure, which indicates that the magnitude of the Rashba splitting is clearly related to the atomic SOC strength. Indeed, the Rashba splittings observed at Cu, Ag, and Au surfaces also show such a correlation between the atomic SOC and the size of the Rashba splitting [16]. In Fig. 2, we plot the expectation values of the in-plane components of the SAM and OAM of the Bi single layer near the Γ point. The SAM and OAM for the Sb single layer (not shown here) follow the same trends as those of the Bi single layer. The only difference is the smaller OAM magnitude for Sb compared with Bi, which might be the result of the smaller SOC in Sb. All the OAM patterns of the bands show chiral structures around the Γ point, and the chiral directions of adjacent bands are opposite to each other. The SAM also shows patterns similar to those of the OAM because of the strong SOC. The SAM for the J ≈ 1/2 bands (bands 1 and 2) is antiparallel to the OAM, while it is almost parallel to the OAM directions in the J ≈ 3/2 bands (bands 3 to 6). We notice that while the OAM magnitude is largest in bands 1 and 2, the largest band splitting exists in bands 3 and 4. Therefore, the magnitude of the OAM cannot be directly linked to the size of the Rashba band splitting. Instead, the asymmetric charge distribution, which results from the interference of adjacent atomic orbitals with OAM, is found to be proportional to the size of the Rashba band splitting.
Therefore, the overlap between adjacent atomic orbitals is also an important factor in the determination of the asymmetric charge distribution, and hence of the Rashba band splitting. We believe that the electric dipole moments of the states in bands 3 and 4 become stronger than those in bands 1 and 2 due to the larger overlap between adjacent atomic orbitals. We discuss this issue in more detail below. We plot the charge densities around a Bi atom for crystal momenta k = 0.2π/a and 0.4π/a along the Γ-M direction in Fig. 3. Bands 3 and 4 show clear charge asymmetry along the z-direction. The other bands show rather small asymmetry. It can thus be seen that the magnitude of the band splitting is closely related to the electric dipole moment of a state in the band. Bands 1 and 2 have the shape of a p_z orbital and therefore small orbital overlap between nearest atoms, resulting in small charge asymmetry; this reveals the importance of the orbital overlap for the formation of the electric dipole. Because of this small electric dipole moment, bands 1 and 2 show small Rashba splitting. Bands 3 and 4 have more charge along the in-plane direction (more overlap between neighboring orbitals) as well as relatively large in-plane components of the OAM. Therefore, they have much larger charge asymmetry along the z-direction and larger band splitting. The top bands (5 and 6) have small charge asymmetry due to the smallest in-plane component of the OAM. When the crystal momentum k becomes twice as large (k = 0.4π/a in Fig. 3(b)), the charge asymmetry becomes more significant. This result is consistent with the Rashba splitting being proportional to the k value, as shown in Fig. 1. The same trend is observed in the Sb single layer. The only difference is that the size of the dipole moment and of the band splitting is smaller because of the smaller SOC. To confirm the origin of the Rashba band splitting, we compare the electric dipole energy difference (dots), ΔP·E_ext, with the Rashba band splitting (solid lines) for both Bi and Sb layers while varying the crystal momentum k from the Γ to the M point, as shown in Fig. 4. Overall, the electric dipole energy difference and the Rashba band splitting are consistent over the whole range of crystal momenta for both systems. The small discrepancy can be attributed to two factors. One is the atomic SOC, α L·S. The other is the electric-field screening effect due to charge redistribution in the Bi or Sb layers under the external electric field. Mostly, the electric dipole energy difference is larger than the Rashba band splitting in the Bi layer, mainly because of the screening effect. We confirm that the Rashba band splitting mostly originates from the electric dipole energy difference in both Bi and Sb layers, which are considered to be in the strong-SOC regime. In the next section, we investigate further the role of the dipole moment in the Rashba splitting and its correlation with the SOC by using a tight-binding approach.

Tight-binding calculations. We construct a tight-binding Hamiltonian in the basis of p-orbital Bloch states, where λ = x, y, z is the orbital index and σ = ↑,↓ is the spin index. The matrix elements h⁰_{λλ′} and their long-wavelength limits are obtained from the p-orbital hoppings on the triangular lattice. We suppose that the inversion symmetry with respect to the xy-plane is broken, so that the ISB term h_ISB(k) appears in the Hamiltonian. With these, one can obtain the OAM of each band. The directions of the OAM of the two lowest bands are opposite to each other when this perturbation scheme is valid. The results for the electronic structure from our tight-binding approximation are presented in Fig. 5.
One can note that the tight-binding bands and their OAM structures show good agreement with the first-principles results of the previous section. We also check that the analytic results in the long-wavelength limit work very well, as indicated by the red dotted lines in Fig. 5.

Dipole energy. In this section, we show that the band splitting is closely related to the dipole energy difference. Here, we regard h_ISB(k) as a perturbation to explain the splitting of the two degenerate multi-orbital bands by the ISB. This is a more general consideration compared with the previous section, and its results hold throughout the whole Brillouin zone if γ is much smaller than the spin-orbit interaction. Let us denote the Bloch states of the two split bands as ψ(±), whose spatial representations are built from the real-space wave functions φ_λ of the p_λ-orbitals and the spinor η_σ, where λ = x, y, z is the orbital index. The dipole energy difference obtained from these states is exactly the same as the energy splitting ΔE = ε₊ − ε₋ between the two bands in equation (15); this proves that the origin of the band splitting near the Γ point is the formation of the dipole moment by the ISB terms. The dipole energy splitting is drawn in Fig. 6 as a function of the SOC strength. We note that ΔE_d saturates to a finite value ΔE_d(α → ∞) = 2γk in the strong-SOC limit, while it depends linearly on the SOC in the weak-SOC region. Although we only considered the lowest bands, we arrive at the same conclusion in the same way for the other ones.

In summary, first-principles and tight-binding calculations have been performed on Bi and Sb triangular monolayers under an electric field as model systems. It is shown that different asymmetric charge distributions are induced for each band of a Rashba-split pair, and their dipole energies are quantitatively calculated under the external electric field. The electric dipole energy difference and the split energy of the Rashba splitting show remarkable agreement. The tight-binding calculation also supports the conclusion that the Rashba splitting originates from the electric dipole energy difference in the strong atomic spin-orbit coupling regime. We show that the electric dipole energy is mainly responsible for the Rashba band splitting.

Methods

To investigate the surface states of Bi and Sb without bulk-state contributions, we considered Bi and Sb triangular monolayers. In order to break the inversion symmetry, we applied an electric field along the direction perpendicular to the layer. For the noncollinear density functional theory (DFT) calculations, we used the projector augmented-wave (PAW) method as implemented in the Vienna ab initio simulation package (VASP) [18,19]. The generalized gradient approximation (GGA) of Perdew-Burke-Ernzerhof (PBE) [20] was used as the exchange-correlation functional, and the spin-orbit interaction was included. The plane-wave cutoff is 450 eV, and the convergence criterion for the energy is a difference of 10⁻⁶ eV between two sequential steps. First, the volume and internal atomic positions of bulk Bi and Sb were optimized until the internal atomic forces became less than 10⁻⁴ eV/Å. Using the converged cell parameters a_Bi = 4.641 Å and a_Sb = 4.388 Å, we constructed Bi and Sb triangular monolayers under an electric field. We then calculated their band structures and extracted the wave function characteristics as well as the charge density at a given crystal momentum k and band index n.
The wave function characteristics, which were calculated by projecting the orbitals onto spherical harmonics centered at the atom, were used to calculate the expectation values of the OAM at a specific k point. From the charge density around a Bi or Sb atom, we calculated the electric dipole moment of the orbital as

P = ∫ r ρ(r) d³r,

where r is the coordinate vector centered at the atom and ρ(r) is the charge density at r.
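The dipole-moment integral can be evaluated directly on a real-space charge-density grid around one atom; the numpy sketch below uses an illustrative grid spacing and a toy test density, not the paper's DFT output:

```python
import numpy as np

def dipole_moment(rho, spacing, center):
    """P = integral of r * rho(r) d^3r on a regular grid.
    rho: 3D array of charge density; spacing: grid step (Å);
    center: atom position in grid index coordinates."""
    idx = np.indices(rho.shape, dtype=float)
    # coordinate vector r relative to the atom, shape (3, nx, ny, nz)
    r = (idx - np.asarray(center)[:, None, None, None]) * spacing
    dV = spacing ** 3
    return np.array([np.sum(r[i] * rho) * dV for i in range(3)])

# usage: an asymmetric test density produces a nonzero P_z, the component
# that couples to the perpendicular external field considered in the text
nx = 41
rho = np.zeros((nx, nx, nx))
rho[20, 20, 22] = 1.0                         # charge displaced along +z
print(dipole_moment(rho, spacing=0.1, center=(20, 20, 20)))
```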
2016-05-12T22:15:10.714Z
2015-09-01T00:00:00.000
{ "year": 2015, "sha1": "3ef80826dc104b88bd25315e1130c7edf4d95d5e", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep13488.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3ef80826dc104b88bd25315e1130c7edf4d95d5e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
251983611
pes2o/s2orc
v3-fos-license
On how people deal with industrialized and non-industrialized food: A theoretical analysis

"Canned, frozen, processed, ultra-processed, functional," etc. Two hundred years after the beginning of the food industry, industrialized food has evolved under many labels. Every person in the world eats and has different experiences with food that are connected to the culture and social relationships which permeate our daily lives in many kinds of situations. Food evokes feelings, beliefs, desires, and moral values. For many people, food not only satisfies hunger and sustains life, but also brings a delicious pleasure that is bound up with their history, culture, and ancestry. Today's food industry pushes products through its marketing, which promotes a plethora of claims that have now trended proportionally with neophobic dimensions. In reality, the general public lacks objective knowledge about the complex science of modern food technology because of its low transparency, and this has resulted in the appearance of misleading ideas that can prejudice the correct analysis of food values. Given this, education about food is an urgent need. Notably, food scientists, technologists, and engineers must look at eaters through the prism of consumers who are human beings in all their rich social and anthropological diversity. The objective of this article is to explore the elemental anthropological aspects of foods and how they can affect consumers' trust in the food industry's role.

Introduction

Food has always played an important role in humanity's development. It was an essential element during the cognitive, agricultural, scientific, industrial, and green revolutions. Since the Cognitive Revolution (circa 70 thousand years ago), Homo sapiens have been able to reflect, change, and transmit knowledge to future generations, molding the social, economic, and relational norms and values that created cultures (1). In centuries past, especially during the Middle Ages and the colonization period, food was also an impetus for political, economic, and power upheavals (2). Fire has been frequently and exclusively used by Homo sapiens for about 300 thousand years to cook food, and it is the most ancient thermal treatment (1,2). In the book "Sapiens, a brief history of humankind," Harari (1) states that fire not only changed the molecular structure of food by transforming it into products that are easily digested, but also altered biology and history. The variety of foods, and the shortened time to eat and digest them, which "cooking" with fire made possible, could explain the larger size of the human brain (3), as well as its shorter intestine (1,4). Fire also powered the development and diversity of cultures (5). Omnivorous feeding was transformed, and the human species went from insignificant animals to thinking beings that eventually dominated the planet and the other species, even though Homo sapiens were not necessarily the physically strongest (1). Cooking, whether at home (using fire) or in industry (using saturated water vapor), has been one of the most ingenious resources invented by civilization (1). It is the evolutionary act of manipulating and combining components to make food creations that do not exist naturally in nature, such as cheese, yogurt, sauces, pasta, cakes, etc. (2). Industrialized food and the use of heat treatments increase the period of conservation and consumption of food (6) by reducing losses and preventing diseases (7), in addition to permitting more variety and diversity in food choices.
This thermal treatment, i.e., the time-temperature binomial, is one of the main process parameters controlled in a thermal unit operation, which is used to transform all kinds of raw materials into edible food and beverages (meat, grains, vegetables, coffee, tea, etc.). Sometimes, "in home" or "by industry" processes are used only to change the texture, taste, and flavor of the food, as with stewed or boiled vegetables (8). The application of this unit operation on an industrial scale is relatively recent. Indeed, wars stimulated the industrialized development of food about 200 years ago. In the early 19th century, Nicolas Appert was awarded a prize by the French government for developing a food preservation method that allowed feeding troops during the Napoleonic Wars: "appertization" (9-15). Some years later, also in France, Louis Pasteur realized that the method developed by Nicolas Appert (heat application) was capable of reducing the microbiological population in food (9,10), which made food safe to consume and lengthened its shelf-life (that is, the time before food spoils), so that food could be safely preserved for consumption over a longer period. Later, Nicolas Appert's and Louis Pasteur's experiments, complemented by Peter Durand's studies in England and Raymond Chevallier Appert's (Nicolas Appert's nephew) in France, opened the way to thermal treatments such as pasteurization and sterilization (Table 1) (10), which are widely used today in the food industry for milk and meat products, tomato sauces, canned vegetables, etc., to reduce the viable microorganism population in processed food. Thermal treatment drastically reduced food poisonings and deaths from foodborne diseases by reducing the microbial population, and this enabled expeditions from England to the Arctic and the 1819 expedition in search of the Northwest Passage (14). Moreover, Europe had a history characterized by food supply chain crises and poisonings, so the possibility of safe food storage was viewed with an enthusiasm that propelled the development of the food industry and food sciences (14,16). The age of the Industrial Revolution also saw the industrialization of artisanal and homemade foods on a large scale, which allowed employees, including women, to stay longer outside the home (7,17-19). This drove a revolution in the food industry that, in turn, facilitated the migration of rural populations to urban centers, which propagated many lifestyle changes. With these developments, women, who were traditionally responsible for domestic services, started entering the labor market (7,20). Given less time for cooking at home, industry also developed labor-saving adjuncts like special ingredients and convenience foods, domestic appliances, and ready-to-eat food services such as restaurants (19). Although these lifestyle changes were not necessarily instilled by food industries, they did make a major contribution to supporting them. During the 20th century, food studies at the molecular level developed the knowledge of emulsion production and stability, the effect of water activity and glass transition on food conservation, the use of bioactive compounds as food additives, hurdle technology, and new packaging systems, among others. Additionally, process innovations such as drying, extrusion, refrigeration, and freezing (10,18,21) were developed for products such as sauces, mayonnaise, ice cream, pasta, and breakfast cereals, among many others (18,21).
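As a point of reference for how the time-temperature binomial is quantified, a textbook relation (not taken from this article) is the first-order decimal-reduction model used to size pasteurization and sterilization processes:

$$N(t) = N_0 \cdot 10^{-t/D_T}, \qquad D_T = D_{\mathrm{ref}} \cdot 10^{(T_{\mathrm{ref}} - T)/z}$$

where $N$ is the viable microbial count after holding time $t$ at temperature $T$, $D_T$ is the decimal reduction time (the time needed to destroy 90% of the population at $T$), and $z$ is the temperature increase that reduces $D$ tenfold. Under this model, for example, a commercial "12D" sterilization holds the product long enough to achieve a $10^{12}$-fold reduction of Clostridium botulinum spores.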
According to Aguilera (18), technological improvements and molecular studies on oils, fats, sugars, protein flours, and hydrocolloids have brought many applications to domestic and industrial food processing. Many products, flavors, and textures have been created and are now consumed around the world. Eventually, macromolecules became nutrients, and this has led to food also claiming functional roles. Furthermore, the 20th century was also marked by the discovery and development of polymers, biopolymers, and food packaging improvements. At both the industrial and domestic levels, today's foods can be consumed many days after preparation, thanks to processing, packaging, and storage technologies based on scientific knowledge generated by a huge amount of high-quality research in Food Science, Technology, and Engineering. Restaurants, although closer to homemade food, serve the same function as industry: to feed people who do not want to cook or do not have time to cook (19); unlike industry, however, they do not face the shelf-life concern. The work routines in urban regions and the presence of restaurants (franchised or not) increased the population of those who eat outside the home, which in the past was restricted mostly to workers during work time or to festive occasions. Nowadays, "eating out" is more frequent, a leisure activity enjoyed with family, friends, or alone (22). With the development of transport, globalization, and the food industry, people can move easily between different cities and countries. Regional foods crossed oceans and were introduced into other diets. Due to technological development, food can be consumed out of season and elsewhere (20). For example, Chilean grapes are found in Brazil, and tropical fruits are available in Europe year round (19). Although social and geopolitical concerns are still linked to food, thanks to the food industry and the leap in food production, eating is no longer a privilege but has become a right (20). Nowadays, there is enough food production for everyone globally (7,20). In light of the above, this article provides a brief critical review of food from a holistic viewpoint that reflects human consumption behavior and the eater/consumer relationship with the food supply chain (producers, food industry, and services), regulatory bodies, and political entities. Initially, the discussion will define food functionality in the sphere of the relevant professional groups of Food Scientists, Technologists, and Engineers (FSTE) and the scholars responsible for the development of industrialized food. Next will be an examination of foods as social and cultural habits, followed by the different roles that foods hold for human beings, from both physiological and emotional points of view (hedonism, fearfulness, blame, and sense of security). This will include definitions of industrial, artisanal, and traditional foods in society and the understanding and acceptance of industrialized food by the eater/consumer. Finally, an additional reflection on food from mystical and symbolic points of view, in terms of philosophies of life, will be examined. It is essential to stress that each of these issues is complex and has been deeply discussed by anthropologists, sociologists, and psychologists in their domains of study.
Notably, this review has a transversal character, in that its focus proposes a renewed direction for Food Science, Technology and Engineering (FSTE) professionals to go beyond technical/economic points of view by focusing more on consumers as human beings. Further to this, the article ends by stating the need for and importance of FSTE professionals being included in all public health debates and classifications.

Is it food?

Eating occurs in cultural modes (2,5,8,20,23-25), and cuisine reflects the culture, social structure, symbolism, economy, and history of a population (23). Cooking is food's passage from its natural to its cultural state (5,8,24,25). According to Lévi-Strauss (5,25), culture mediates the relationship between humans and everything surrounding them. To him, the kitchen has its own language, which changes according to the society. In cuisine, food is not simply prepared; it is prepared by one specific procedure or another and demands a pan, the cultural element that represents civility. Cuisine defines the human condition in all its attributes, even those that may seem "unquestionably natural" (5). In "The Culinary Triangle" (Figure 1), Lévi-Strauss (5,25) described nature and culture as opposites mediated by the kitchen. In this view, raw food represents nature and is connected to cooked food through culture, which, in turn, finally returns to nature in its rotten condition. This concept has been changing nowadays, with, for example, the appearance of vegan diets and organic ("natural") foods. In this way, FSTE plays a role similar to that of the kitchen, cooking food better by controlling the technical parameters of unit operations, additives, and packaging, thereby prolonging food shelf-life as much as possible before it returns to its rotten condition. Because of food's complexities and cultural values, this kind of change can generate identity conflicts for the eater/consumer (8). The decision to eat is also cultural (1,2,26-30). While processing food to provide the energy and nutrients that keep the human organism functioning, Homo sapiens developed many integrated patterns of knowledge, beliefs, and behaviors about food that are learned, shared, and transmitted across generations, transforming food into culture (26). Culture and consumption are not only interconnected but inseparable. By helping to make sense of everything that surrounds Homo sapiens, culture determines and controls the criteria and distinctions about what is acceptable, marketable and, therefore, capable of being consumed (30). In this way, individuals eat what is allowed and accessible within their cultures (8,31,32). People in Asian countries eat dishes prepared with insects (Indonesia, Thailand, the Philippines, etc.) and dogs (China and Korea). Italians and the French eat snails and rabbit meat, which is not common among Brazilians and the British, although England and Brazil, among others, consume products from cattle, pigs, and poultry. These and many other rejections occur mainly because moral aspects are at work in each culture (33). The dog is a life partner to Brazilians and the British, which is not the case with cattle (29), while in India, where the cow is sacred, it cannot be slaughtered and consumed as food (3,8). There is also a difference between the meal and food/foodstuff. Meals are connected to culture (2,23,34); however, FSTE understands food from a technical point of view.
Food and foodstuff are the products that we can eat: food being that processed at home, and foodstuff that processed in industry, independently of the degree of processing (4,18,35). This is what our evolutionary characteristics (dentition, jaw, and bowels) allow us to eat without risk to our lives. Nevertheless, some kinds of food can be eaten only after being processed (at home or in industry), that is, as foodstuff: rice, beans, corn, potato, and cassava are not consumed as fresh food; however, they are excellent energy sources after cooking and/or processing. Similarly, some foods such as wheat, soybean, olive, and nuts, among others, are raw materials for foodstuff, which means that they are usually eaten only after more complex processing, without necessarily using additives (36). Thus, to FSTE professionals, a meal is what we eat, and food/foodstuff is what can be transformed into meals, independently of being classified as raw material, minimally processed, processed, or ultra-processed food according to the NOVA classification (37). The FSTE professional carefully considers the microbiological, sensory, and nutritional quality of raw materials, water, ingredients, and final products during development. The focus is to attend to consumer needs by providing satisfaction, pleasure, and nutrition under safe conditions (17,38). There is also the maintenance of the consumer's quality of life, from both health and lifestyle points of view. FSTE professionals aim to supply food to every person and every lifestyle around the world. To those who like to cook, industry offers simple ingredients, such as salt, oils, flour, sugar, spices, etc., or more complex combinations such as emulsions, flours, sauces, meat and vegetable extracts, and milk cream, among others. On the other hand, there are convenience products for those with practical lifestyles (4,8,39). With industrialization, it is estimated that 80-90% of the ingredients and foods used in home cooking are at least semi-processed by industry (9,17). All this concern is intended to satisfy the consumer, who is a human being shaped by culture, full of feelings and insecurities. However, FSTE professionals do not explore anthropological aspects, and this sometimes results in a weak connection between food processing developers and the consumer. Human identity is built from memories, affection, sensorial experiences, and nostalgia (34). Some groups understand food as a product of rituals and traditions materialized during the act of cooking. For them, food is more than a simple meal that provides energy and nutrients to the body; rather, it is a symbol of their culture and ancestry and part of their identity (20,24,40), or, in other words, it is performed through practices and relationships that are central to social reproduction (41). Some folks still believe that the feelings experienced during the act of cooking (including the feelings experienced by the slaughtered animals) can be passed on to the food, transforming it into a "blessed" or "cursed" meal. Therefore, from the cultural point of view, food nourishes and the meal has a "soul" (20,24,40). To some folks, industrialized food represents a threat to their culinary tradition and food cultural heritage (20,24,34). In the modern world, practicality can be an imposed necessity (34,42).
For some people, urbanization and industrialization have reduced the steps of cooking preparation and replaced them with pre-processed industrialized food, which, as a commodity, has become separated from its natural origin (20,34). Jean-Pierre Poulain, in his book "The Sociology of Food" (20), explains that, in the modern structural context, the individual loses his role as an eater and becomes more of a consumer. Poulain also describes the food industry as entering the familial cooking space, attacking its socializing function without assuming it. Food and cuisine are elements of collective feelings and belonging (8); although technically incorrect and without scientific evidence, it is possible to understand the origin of expressions like "real food" to mean home-processed food. These identity groups have difficulty accepting the inclusion of industrialized food in society due to their moral values and affective memories, which are rooted in their culture (20). Food is identity (2,20). It is even possible to recognize an individual's personality traits through the elements that permeate their diet (24). Cuisine is the last aspect to change during an assimilation process (8). FSTE professionals and the food industry as a whole aim to meet the food demand of Homo sapiens in all its diversity. In this way, FSTE professionals should thoroughly understand the cultural aspects that permeate the eater/customer. In reality, the human being does not feed on complex molecules; most people feed on the habits, rituals, knowledge, and sensations that the food represents (32). According to Claude Lévi-Strauss, food is not "good only to eat but is also good to think" (24).

Food, culture, and social interaction

Eating is a way to communicate, and it is part of social relationships (5,22,23,29,31). The act of eating together with others is typical behavior of Homo sapiens. Human beings do not come together merely to eat and drink; rather, they drink and eat together, socially, in an act of interaction (2). Eating is a complex phenomenon that includes biological, psychological, and social aspects (8,20,24,40,43). More than a physiological need, food is associated with a people's sociocultural identity (3,24,29,31,40,44). Folk cuisine originates from a historical process and is loaded with singular traditions that, belonging to a dynamic society, are constantly transforming and changing (44,45). There is a distinction between eating (a social action) and nourishing (a biological act) (46). Eating preference is not individual; it is associated mainly with cultural aspects (8,24,29,40). Food consumption, in addition to nutritional requirements, is influenced by hedonism, moral responsibility, convenience situations (such as vacations, parties, and celebrations) (40,43,47,48), and lifestyle (likes, working/study hours, leisure time to shop, cook, eat, and do the household chores) (23,24,48). Eating is part of many temporal cycles, whether related to obtaining food (planting, harvesting, production, and availability) or to periods of fasting (characterized by the absence of food) and festivity (when many foods are allowed) (20). These biological and social aspects are marked by many interactions. Eating is the first step in human social learning (29), which evolves into more complex human relationships. Friendships, neighborly relations, and even politics also revolve around food (22,40). Sharing the meal, especially at home, is the first phase of group association (2).
In childhood, as biological mechanisms emerge, they are modulated by these social aspects (breastfeeding, rest, and parents' working hours). As the child starts eating food in place of breast milk, the biological and the social merge to adapt culturally (20,40). The habits learned during childhood are modified throughout life, primarily as outcomes of the social interactions experienced at school and in professional environments, when personal identity and the sense of belonging are formed (24). From the Latin habitus, habit means a constant disposition to act in a certain way (46). Thus, eating habits represent a contextualized attitude that is regularly and unconsciously repeated and results in an acquired disposition associated with psychological and social meanings, which are difficult to modify after acquisition (33,49). Likewise, food preferences have been transformed into habits and traditions over the centuries, and time is needed to modify them (3,31). With industrial developments and the consequent urbanization processes, society has become less dependent on the harvest cycle (2). The concentration in urban centers changed the food trade and people's relationships over time (2,29). Today, products are sold at supermarkets (10,20,42) and their prices carry intrinsic value quantified in money. The barter and exchange systems no longer exist. Food is now stored in refrigerators (10), not preserved in animal fat, salt, or vinegar (2). Time is no longer measured by the sun's movement, and food access is no longer directly dependent on the growth of plants and animals (29). Clocks have become essential (1). With stipulated times to start and stop, the workforce is now rewarded with money instead of actual goods for sustenance. The week has been divided into workdays and days off (1). Women, the traditional keepers of food knowledge and of the responsibility for cooking, joined the labor market (20,42). Communities and families were replaced by the state and markets, and religiosity by secularism (1). The evolution of civilization has also changed cooking habits (20). The floor fire and simple stove have been transformed into gas or electric appliances, which take less time to cook (50). To protect food and reduce waste, food nowadays is sold inside packaging and frozen in freezers (10), rather than displayed in blocks of snow, fat, or brine (4). Products and regional ingredients have crossed geographic barriers (19). With globalization, some culinary traditions disappeared while others expanded, were created, or, in modern terms, were "fused". For example, potatoes were incorporated into Irish cuisine, tomatoes into North American cuisine, corn and cassava in Africa and Europe, wheat flour in Brazil (29), and Mexican pepper in India (1). Arising from different geographic cultures, foods have hardly kept their original characteristics (2). For example, a sweet drink produced in Switzerland by a local company, if marketed in France, will have its sweetness reduced. In the same way, if the target audience of this company is Italian, Portuguese, or Brazilian, the sugar content will probably be higher than the original (2). These cultural adaptations can also be exemplified by coffee, which, even from the same brand, has a different flavor in Italy, Denmark, and the USA (20). No matter the place of processing (industry, home, or franchise restaurants), food will undergo modifications based on the contemporary food habits of where it is eaten.
In France, McDonald's franchises offer beer as a drink option; in the USA and Brazil, only non-alcoholic beverages or soft drinks are options. In France, the Netherlands, and Belgium, fries are accompanied by mayonnaise, while in the USA it is ketchup, and in Brazil both mayonnaise and ketchup, whereas in Quebec (Canada), fries with sauce and cheese, as in poutine, are popular (20). Poulain (20), as well as Fischler (24) and Montanari (2), considered that globalization and the internationalization of markets will result in culinary compositions and re-compositions; therefore, globalization is not merely a destructive force acting on regional food and culture. Industrialized food does not carry symbolic, moral, or ideological value the way traditions do. Nonetheless, even inside the same culture it is possible to have differences, such as in the definitions of the food, the way it is processed, the rules for eating, and even the attached moral values. Thus, besides being a stage for symbolic and ideological conflicts, food also marks boundaries between distinct cultures (20). In this way, culinary traditions cannot be reduced to ingredients or recipes fixed to some place or time (40,44). To Contreras and Ribas (51), our omnivorous deculturation will happen due to food's medicalization, and not only because of food industrialization. The belief that health can be attained just by food choices will transform food into healthy molecules that prevent illness. It is well known that low consumption of nutritious foods can cause diseases; thus, food can be considered a source of health.

Physiology, hedonism, fearfulness, and blame

The primary function of food is to supply energy and nutrients for the maintenance of life. The human being eats to live (31). By definition, diet is the individual's dietary pattern (52). It is a source of health, taste, and pleasure and is influenced by culture, geographic location, religion, and lifestyle (52). On the other hand, when inadequate, diet can also be a source of illness (7,8). Despite increases in food production, people are still hungry, malnourished, and overweight (7,9,53-55). Malnourishment and obesity are reflections of inefficient or incorrect food intake, unbalanced from nutritional and caloric standpoints (7,34,53). Access to nourishing food is essential to meeting the physiological needs of humans and maintaining life; however, the lack of education about food hampers good health (7,53). In this context, fake news and misinformation can create insecurities and uncertainties related to food intake and may induce anxiety and even cause panic (20). Homo sapiens have not yet completely learned to control their brains, their desires, or their reactions (56). When neurons are activated and synapses fire unconsciously, they produce biochemical processes that have been influenced by cultural factors. Desires are not planned; we just feel them. In this context, the external and virtual world, many times unreal, can cause significant damage, such as an obsessive search for opinions, feelings, and desires, manifested in the need for social belonging (28,56). The relation between hunger and satiety is also influenced by hedonism (29), and exaggerated concern with diets can cause psychological imbalance, decreased quality of life, and lower life expectancy (33). In other words, by provoking anxiety in the eater, exacerbated concerns about diet can harm health rather than improve it.
For example, North Americans are generally more concerned about diet than the French (especially about health and appearance); however, the French have a healthier diet than North Americans (8,33,57,58). On the other hand, in a recent cross-cultural study, Sproesser et al. (59) analyzed 10 countries (Brazil, China, France, Germany, Ghana, India, Japan, Mexico, Turkey, and the USA) with regard to traditional and modern eating and found that, in contrast to past studies (33,60,61), the USA and France now appear similar in attitudes toward food and portion sizes when it comes to what constitutes traditional and modern eating. Additionally, Sproesser et al. (59) describe that in geographically vast countries (such as Brazil and the USA) there might be heterogeneity not only across regions but also across different ethnic groups within one country. Guiding food choices, as presented in the Food-Based Dietary Guidelines (7) through food classification strategies and considerations of food-intake behavior, is extremely complex (62). In addition to accessibility, availability, taste, nutrition, and the consumption situation (festive or daily), there are also emotional, cognitive, psychosocial, and cultural issues (8,24,48). Food choices are specific to the context. The social environment is an essential delimiter of likes and choices (32). Social life is modulated by feelings and definitions of what is allowed/prohibited and even of what is impure (8,31). Impurity is related to blame, gluttony, disgust, and laziness. Gluttony is associated with pleasure in eating. Laziness is a certain discouragement from daily cooking, which can be understood as an aversion to work. Blame and disgust are about whether or not the food is good to eat, but as a cultural judgment, with no relation to health (31,46). Food is frequently consumed in moral terms, according to what the cultural conceptualization regards as good and bad (acceptable/not acceptable), not necessarily or exclusively taking into account individuals' particular likes, such as the taste of the food or even the desire to eat it (2). In this way, a food transgression can imply moral judgment and blame in the eater (8). Blame is also linked to food ingredients, which can be understood as dangerous to eaters (31). These feelings cause conflicts in the eater that can harm physical and mental health. In the contemporary world, hedonism has assumed an emotional rather than a sensory character (30). Most healthy foods are not seen as tasty; in this context, the desire for healthy eating opposes hedonism. Fresh food is seen as pure, while industrialized food is viewed as artificial (6,47,63-66). Recently, psychologists defined "orthorexia nervosa" as the obsession with eating healthily (67,68). According to Bhattacharya et al. (69), orthorexia nervosa describes a fixation on food purity involving ritualized eating patterns and a rigid avoidance of unhealthy foods. Unlike anorexia and bulimia nervosa, orthorexia is related to food quality (in a healthy sense) and not to quantity or body mass (68). Attentive to the market, some brands are offering food products that meet these customers' needs (31), including rescuing ideas of nostalgia and tradition (70). Nonetheless, this cultural and emotional rescue involves the use of terminologies and definitions that
are not yet clearly defined, such as artisanal, traditional, and natural food, which are being specially labeled by food producers (companies or entrepreneurial enterprises) and which can carry mistakes and misinformation that consequently engender more insecurity, distrust, and anxiety in the eater. Because of this, transparency is fundamental for food industries (10).

Food industry, traditional recipe, and fast food

Full of ancestry, many cuisines have been changing over the centuries. Even in places famous for their traditional heritage, it is hard to find meals with the same taste as those made by past generations. Tradition is mutable; however, meals carry worldviews (71). If a recipe dies, it takes its vision with it (2,71). In the modern and globalized world, food preference is divided between the traditional (cultural heritage) and the modern (international, innovative, and practical) (72,73). Products never seen or tried by some cultures have started to appear on supermarket shelves, in restaurants, at food events and, over the years, frequently inside homes (8,20). Avocado, guacamole, kiwi fruit, tabbouleh, paella, tacos, pizza, pineapple, soy sauce, and raw fish, among other regional dishes, are nowadays present worldwide in many cultures (8). Montanari (2) describes how Homo sapiens used agriculture to build food cultures, and how post-industrial culture has drawn the mistaken conclusion that there is a fundamental naturalness in agrarian activities, usually regarded as tradition. There is no definition of natural products (64-66). For example, flour obtained from wheat (present naturally in nature) gives rise to bread which, in turn, does not exist naturally in nature and yet is considered a traditional food in several countries of the world. The same can be said of the cheeses, wines, and beers of the French, Italians, and Germans, respectively. In addition, there is also a mistaken understanding that "more natural" foods are safer (9). This kind of thinking ignores that toxins and pathogens extremely dangerous to life can be naturally present in fresh foods. To Montanari (2), the differentiation between what exists naturally in nature and what is obtained from it distinguished human and animal identities and, from the social point of view, originated civilization. Fischler (8,42) reviewed some historical changes in cuisine. In the last century, circa the 1930's, a considerable amount of collective culinary activity was redirected from the kitchen to industry (42). In the past, culinary knowledge was transmitted essentially from mother to daughter (8). With the social changes of urbanization and the advance of industrialization, many women entered the workforce. The role of cooking and the perpetuation of cooking knowledge were no longer exclusively women's to teach and learn. Recently, although in lesser numbers, men have also been working in the kitchen (7,8,42). Nowadays, food knowledge (traditional or not) can also be obtained individually, from books, videos, or social relationships that do not necessarily involve family or other feminine authority (8). This reality especially challenges traditional cuisine producers who, depending on customer acceptance, have to make minor changes in recipes to improve health, safety, and convenience (74) without losing the tradition and taste. Currently, health issues can take precedence over traditional ones (73).
Souza Junior (71) relates that in the Candomblé religion, where tradition is valued, it is possible to note the incorporation of industrial ingredients and the rejection of traditional ones to avoid illness. Although traditional food is understood as healthier by the lay population, there is no correlation between healthiness and either traditional food (73) or industrialized food (75). The Mediterranean diet is considered healthy by the scientific community (76); however, traditional products consumed by these peoples, such as hams, olives, pastries, and cheeses, can have high contents of salt and/or fats (77); as a percentage of energy, total fat content can be as high as 40%, with over half being monounsaturated fat (76). Even so, some of these products have been classified as ultra-processed foods, which means unhealthy in some Food-Based Dietary Guidelines (FBDG), such as Brazil's, which uses the NOVA classification (78). The Mediterranean diet is healthy because of its nutritional biodiversity and moderate consumption, complemented by a philosophy of life that values personal relationships, the pursuit of happiness, and physical activity (73,79), and not necessarily as a function of the number of unit operations to which food has been submitted. The Mediterranean diet pyramid has socio-cultural relationships and physical activities at its base, i.e., as a priority even before food choices (79). The Brazilian FBDG, despite using the NOVA classification, also guides people to experience social and pleasurable eating. Traditional food is made with regular ingredients, following the usual processes of traditional recipes. The tradition involves knowledge, techniques, transmitted values (2), and emotional and ancestral issues (73). There is no official definition of traditional food. Guerrero et al. (74) explained that traditional food can be "a product frequently consumed or associated with specific celebrations and/or seasons. It is normally transmitted from one generation to another, made accurately in a specific way according to the gastronomic heritage, with little or no processing/manipulation, distinguished, and known because of its sensory properties and associated with a certain local area, region, or country". According to the European Commission, "traditional means proven usage in the community market for a period showing transmission between generations; this period should be the one generally ascribed as one human generation, at least 25 years". Readers interested in studying the definitions of traditional food are invited to consult Guerrero et al. (74). Tradition is part of food's cultural heritage (80); however, culture is related to both tradition and innovation (2). Nonetheless, in the contemporary world, practical, international, and industrialized, is it possible to have the same food as our ancestors, even from a traditional recipe? Ingredients are everything that is incorporated into a recipe (72). Nowadays, to guarantee food safety, ingredients have been industrialized. Regardless of the safety issues, could modern ingredients modify a traditional dish? Reconstructing an original recipe is highly ambitious (2). Beyond the ingredients, could modernity, made viable by domestic utensils (stoves, steel or aluminum pans, etc.), modify traditional dishes? Cooking is a skill of combinations (2) that, over the years, can yield new dishes or newly adapted versions of dishes (2,20,24).
As with culture, human taste is not static (2); therefore, the perception of different flavors in traditional dishes can be due to modifications of ingredients, preparation method, and taste. In addition, according to Montanari (2), the human organ responsible for the perception of taste is the brain, not the tongue, and the brain's perception, in turn, is strongly influenced by our culture. Another dietary consequence of the modern lifestyle involves time. Stimulated by accelerated routines and often full of anxiety, people choose food that does not require more time and stress in their decision-making. In this context, fast-food chains have increased worldwide as business-model franchises, such as McDonald's, Subway, Starbucks, KFC, Taco Bell, Domino's, Pizza Hut, Dunkin Donuts, Papa John's, Burger King, etc. Fast food offers convenience with little tradition (8,42), and such franchise-type restaurants now dominate the food plazas of modern malls and shopping centers worldwide. This eating style induces people to have meals unconsciously, occasionally alone, to supply their physiological need (hunger). Fast food can trigger "disenchantment with the world", defined by sociologists as loss of meaning and devaluation of emotion (72). In addition, the worldwide spread of this North American culture, especially in European countries, has provoked some anxiety and fear of losing national or local identity (42,80). Generally, fast food is eaten with the fingers and without a plate or cutlery, in contrast to other styles like French eating etiquette or Asian traditions, where a much different set of dining manners constitutes the civilized standard. This difference in manners of eating, independent of the kind of food, can cause a conflict of feelings and moral judgments in the eater (20). Despite being associated with hamburgers and junk food, this restaurant style provides different kinds of food, such as pizza, national foods (Japanese, Korean, Mexican, Arabic, Brazilian, etc.), and also traditional homemade-like food. In the context of health, more than one third of the worldwide dietary guidelines advise avoiding fast food (81), but herein lie common conceptual mistakes that lump together fast food, industrialized food, and junk food. Industrialized food is processed by a company with industrial equipment at an industrial level. Industrial food is made available to the eater/customer through the retail segment and restaurants as well. Fast food is not necessarily industrialized food, although such restaurants can use industrialized products for cooking and operate with an industrial philosophy (similar to Fordism) (8). Further, junk food has come to signify foods of low nutritional quality (82,83), which may include food processed in industry, at home, or in restaurants (franchised or not). In a more accurate summary: junk food depends on the nutritional composition of the food; fast food is a restaurant style; and industrialized food is food that is mass processed by industry (Figure 2). For people who regard traditional foods and moral values as important, industrialized food and fast food are transgressions (20). Nevertheless, one food can be beneficial where another is not, depending on the context. Diet food is healthier for people who suffer from diabetes, but not necessarily for the whole population. Regular yogurt can be good for people who do not suffer from lactose intolerance. Fish is good for people who appreciate its taste. Therefore, when food is involved, there is no universal rule.
In this way, generalizations are equivalent to misinformation. Sanitary rules, such as the requirement in some countries to use pasteurized milk to process all kinds of cheese and derivatives, a public health policy to avoid foodborne disease, affect the moral and cultural value of food. Cultural heritage and food safety are both important to society and contribute to the economy (74). Public health agencies and scholars must find a way to reconcile them. In this context, the Food-Based Dietary Guidelines (FBDG) can be a powerful tool to guide food choices, exploring a country's food and cultural diversity, including regionalities, beliefs and philosophies of life, lifestyles, age groups, different identities within a culture (such as indigenous peoples), and different conditions of life (such as breastfeeding, intolerances, and allergies), among others. This, however, requires more multi- and transdisciplinary work.

Should I eat it?

To Fischler (8,24) and Contreras (84), the omnivore experiences dilemmas that the cow or the koala never has. Homo sapiens have a vast variety of foods, taboos, rules, traditions, and beliefs, resulting in conflicting emotions, mainly about the unknown. Neophobia and neophilia are conflicts experienced by humans when faced with an unknown food (47). Neophobia is the fear and rejection of the new, while neophilia is the attraction to and curiosity about the unknown (8,24,29,47,85,86). In contrast with domestically processed foods, industrialized food causes more rejection and feelings of insecurity in eaters (85). When faced with industrialized food, the eater/customer does not know the origin, the quality, or the history of the food (8,24,51). Therefore, food processed at home and belonging to the country's culture produces less neophobia and brings tranquility and, mainly, familiarity to the eater (34,47,85). Industry must inform and be clear about a new product's ingredients and consider its risks and benefits to reduce neophobia and improve eater/customer acceptability (85). With the development of the food industry, from the historical point of view, food insecurity, food safety problems, and poisonings were controlled and strongly reduced (29). Scientific knowledge about microorganisms, pathogens, and toxins has never been as precise or complete as it is today. However, despite safety improvements, there is a mistaken perception of risk by the eater/consumer (18,29,70). Although food safety is one of the pillars of the FSTE professions, this concern is not noticed by the consumer. The insecurity produced by this lack of knowledge induces people to look for food that they believe to be safer and healthier (31), as well as to idealize the past (70). Consequently, many entrepreneurs, and even big companies, have emerged selling artisanal or gourmet products that attempt to keep and rescue traditions and origins (40). Yet fresh products (fruits, vegetables, and animal products such as dairy and meat) can be a source of contaminants and diseases (39,87). To ensure food safety in industry, technical knowledge, good practices (such as efficient Hazard Analysis and Critical Control Points, HACCP), and health regulations are primarily used (87,88). Consumption of food that has been produced erroneously can reignite already-controlled illnesses caused by viruses, bacteria, fungi, and toxins (87).
In this context, especially for fresh food, minimally processed food (MPF), non-thermal processes, and special active packaging have become effective optional methods to offer safe and fresh products (89). Some literature states that the concept of risk changes according to the culture and history of the population (90,91). For French and Spanish women, pesticides, medicines, microbial contamination, pollutants, genetically modified organisms (GMO), and epidemics usually represent a health risk, whereas for Brazilian women these concerns depend on their social class (90). Industrialized food and chemical components (including food additives) also cause mistrust (29). The chemical products used by the food industry are regulated and monitored by oversight agencies in each country. For many people, however, the government sometimes seems to protect companies (agribusiness, industry, and the supply chain) more than the eaters/customers (29,34). Disoriented, consumers then access only media information, which can sometimes exacerbate fears and phobias (31,40). Nowadays, fake news and many possible problems are exaggerated by social media interventions (7). By definition, "a risk" is a possible future adverse effect resulting from human choices and actions (31,49). Nonetheless, sometimes the risk is not associated with health. For some, the risk of getting fat is related to conforming to an aesthetic standard and not only to avoiding diabetes or obesity (8,29). However, exaggerated concerns with diet, aesthetics, and fads can trigger diseases such as anorexia, bulimia (8,53), and orthorexia nervosa (67,68). Within the same culture, the understanding of risk can vary according to gender, social position, values, and beliefs (34,91). Regardless of the concept, the eater/customer better accepts old or already-known risks (47). Frozen foods were not well accepted by the population at the beginning of the 1940's, when freezers started to become common in society; now they are commonplace (29). The consumption decision follows the balance between the perception of risk and the perception of the product's potential benefits (91). The eater/customer feels insecure because they feel forsaken and are no longer willing to trust. Despair, skepticism, and doubt surround the eater during decision-making (31). Purchase decisions are driven by three motivations: sensory attractiveness, biophysiological and social benefits (prestige and nutrition), and ethics (origin and ideological issues) (32). Barbosa and Campbell (30) describe that consumption and identity are linked; however, identity is more connected to the consumer's reactions to a product (feelings and desires rather than necessity) than to the product itself. To Galindo and Portilho (31), it is inaccurate to equate purchase with trust. The purchase represents daily experimentation, permeated or not by luck. This mistrust results in fear, which can be fed by facts or by fake news (31). When a person is scared, rational human capacity is limited (29). Consumer goods are a visible part of culture (30,92). Portilho (92) explains that consumption choices are related to belonging experiences that, in some cases, classify the decision made as superior or correct. In this way, consumption and culture are linked through cultural and moral aspects (93). Moreover, consumption is also associated with moral notions such as being "good citizens", "good parents", or a "good family" (94).
Industry and the kitchen have the same primary function of processing and preserving foods; however, to some people, food processed at home is like the "good mother", purified by love and familiar ritual, while industrialized food is like the "bad mother" and, therefore, a product of untrustworthy manipulations (8). Moreover, the act of following collective thinking, especially when influenced by concepts of equality, citizenship, and freedom of thought, is seen as the way to achieve a "good, fair, and happy life" (93). In this way, the understanding of food as nature leads to its idealization, which contrasts with the way most people regard some technologies and even cultural practices. This influence is a new conceptualization of what is good, healthy, and faithful (94). Food is the convergence point of the state, corporations, and individuals (94). Distrust of public institutions increases the politicization of consumption (95), in which individuals perceive their consumption as a form of participation in the public sphere, boycotting or "buycotting" products and brands (93,94). Currently, the customer has migrated toward more critical, autonomous, and active behaviors (93,94). Modern consumers assign responsibilities and duties to themselves in the social and environmental context (92,93). Consequently, during 2010-2017, around 30,000 products introduced ethical, social, and environmental claims on their labeling (9). Despite FSTE concerns about food safety, the feeling of security does not necessarily convince the customer. Although scientific knowledge has never been as voluminous as it is today (29), the concept of risk has never been so mistaken (70). The lack of knowledge about the origin, the process, and the food in general, together with the controversial information advertised in the food arena and in traditional and social media, fuels mistrust and moral conflicts. For the eater/customer, the right of access to quality food includes the right to make free and well-informed choices, according to each individual's preference (80); therefore, transparency among institutions, eaters/consumers, and corporations becomes a vital factor in contemporary feeding (94). There is no life without food. Regardless of how food is understood, every person in the world eats and has at least a minimum knowledge about food (2). Before the Industrial Revolution, laypeople cultivated food without technical regulation and agency monitoring. In 1850, 90% of the population worked the land (28); nowadays it is <40% (96). In previous eras, food poisoning and hunger were recurrent and responsible for many deaths, especially in Europe (16) and Russia (14), and were neglected in other countries. Foodborne disease and hunger began to be controlled with the development of the food industry, when the thermal process was developed and applied by industry (10-14,89). During wars, the first people to experience neophobia/neophilia with industrialized food (commercially sterilized food in glass or tin packaging) were soldiers and expeditionary troops (14). Processing turns agricultural commodities into edible, safe, healthy, and nourishing products (97). Food processing in current industry guarantees a standardized, transportable, and safe product that can be consumed over a longer period (4,15,36,39,89). However, acceptance of industrial food, and later of frozen food, was slow and surrounded by mistrust.
To Cascudo (3), "the food industry reduces the kitchen to a cabinet with cans, where the essential technique is to open the can without hurting the fingers." For Giralmo Sineri, "Canning is anxiety in its absolute state" (2). In addition, widespread speculation without evidence about botulism and about chemical contaminants added to food during packaging intimidated the population against consuming it (14). Moreover, although some canned food is nourishing, the perceived nutrient loss during thermal treatment raised neophobia (13,16). Currently, commercially sterilized food is widely present in the market (98); however, it now comes not only in cans but also in polymer-based pouches, cardboard-based packages, and glass bottles (13). To be accepted, new foods must be part of the population's habits, have good quality, an affordable price (16), and a short cooking time. It takes a long time to achieve consumer/eater trust and break down the neophobia barrier (29). With the rise of the food industry, and despite the diversity of products and packaging, all industrialized food was labeled "canned" food. Nowadays, terms such as "processed" or "ultra-processed" food are used to mean industrialized food, both with a pejorative meaning (4,99). However, food processed by industry is nothing more than a large-scale adaptation of home-processed food, made with scientific knowledge and rigorous control (18,35). Meals made at home or in restaurants are also processed, but not always with technical control. Fortunately, they are usually consumed just after cooking, which means their shelf-life is not a concern. The Brazilian and the Uruguayan Food-Based Dietary Guidelines (FBDG), adopted by governments as public policy tools, classify food by processing level to indicate nutritional value (7). The term "ultra-processed" (UP) food (created by the NOVA classification) means "not real" food (37) and, despite the classification being by processing level, the arguments used for avoiding such food concern its ingredients and not its process parameters (4,7,11,15,100). Despite the good intention behind this classification system, most notably there is no relation between healthiness and processing level (75,101). Among the foods classified as UP, nourishing foods are included (102,103). Moreover, diets without UP food can also be unhealthy (82), and there is still confusion with junk food definitions. Furthermore, the term UP does not exist in process engineering terminology. To FSTE, a process is a sequence of unit operations (7), and "ultra" means high intensity (as in ultra-high temperature, ultrafiltration, ultra-clean filling, ultra-efficient, etc.), not quantity. The NOVA classification was created by health professionals, who are experts in health segments such as epidemiology and recognized within the scientific field; however, they lack expertise in food processing (e.g., unit operations and process engineering). The terms UP and "real food" are misleading (4,102) and do not help to improve the understanding of healthy food (7). Although the concept of UP foods has certainly entered the consumer consciousness, mistakes have been made in attempting to classify them unequivocally and accurately, as observed by Braesco et al. (104). Still, despite being an industrialized food, functional food has good customer acceptance and is a market trend (105). With a healthy role, functional food provides additional nutritional benefits (86,91).
The segment has been dominated by probiotic products and functional ingredients, which have been developed in all food categories since the 1980's, such as dairy, soft drinks, baked goods, and baby-food markets (105). People have accepted that functional food consumption improves health. Thus, although food decision-making is intrinsically related to the historical, social, and cultural context of each country, the association of food and health has spread worldwide (90). In the modern world, people are concerned about health and longevity. At the same time, convenience is a need, and the food industry is essential to meeting it (9). Yet, after about 200 years of the food industry's existence and 60 years after Food Engineering became an established field of science, this has not been enough for some people to trust and feel safe with industrialized food. There is a consensus that, if safe from the microbiological and toxicological points of view, fresh food or minimally processed food should be the main source of nutritious food; but for people who cannot cook or do not want to cook, a quality alternative must exist (103). Furthermore, people lack knowledge about industrialized food, quality, and food safety in general (9), so how can they trust something they do not know sufficiently? The inclusion of food subjects in basic education, such as food education, food safety, nourishment, good domestic food handling, and sustainability issues, must be considered as a public policy tool (106).

Further considerations

Some life philosophies aligned with faith understand food as a source of life or of contamination. From the religious point of view, food, especially food related to rituals, can have spiritual meaning in addition to its nutritional value. For example, in Catholicism, Easter eggs represent new life and resurrection in Christ. The bitter herbs and bread used by Jews on Passover symbolize their periods of slavery and escape from Egypt. Moreover, in their New Year celebration, Jews eat honey in the wish for a sweet and fertile new year, fish to always move on and ahead, and pomegranate seeds so that their good actions are multiplied (56). In the yam (or pestle) celebration of the Egibô kingdom in Nigeria, the preparation and consumption of the cake represent survival and splendor, signifying life and death, hunger and abundance, disease and health (Candomblé, an Afro-Brazilian religion). The elements of this ritual are synonymous with strength (71). Furthermore, to Muslims, food can influence the soul, behavior, and moral and physical health; thus, the food they consume must be Halal, i.e., in accordance with the law of Islam (107). According to Fambras (108) and Jia and Chaozhi (109), Halal products increase between 15 and 20% a year worldwide, and it is estimated that the Muslim population will represent around 30% of the world population in 2050. According to Souza Junior (71), to Afro-Brazilian religions, especially Candomblé, food is a synonym of "axé", which means life. In Candomblé, nothing can remain without food, and its correct consumption is related to health maintenance. Food is the source of axé and transmits vitality and heat; when the heat is over, the body dies. In addition, the rituals involved in food preparation are also important and, if performed inappropriately, can provoke the opposite effect (71). Similarly, in a deep way, food is mystical for Catholicism and represents God. It is God in the mouth.
Through the ritual, bread and wine become the body and blood of Christ (20,40,56). Besides religions, food is also at the center of some philosophies of life, such as vegetarianism and its derivations (veganism, flexitarianism, and others) (110-113). These derivations comprise a broad eating lifestyle which either excludes or restricts the consumption of animal foods (meat, eggs, milk, cheese, and so on) (110). This choice is motivated by ethical issues concerning animal well-being, the environment, and health (110,111,113,114). Vegetarianism and its derivations are related to identity issues and the individual's personality (111). It is a food-intake and lifestyle choice practiced by adults (113). People become vegetarian during adolescence or adulthood. Adhering to this philosophy is a conscious decision, not an imposition (111). Although the vegetarian philosophy is old, scientific studies on its social, ecological, and health consequences are quite recent and need further deepening (110-113). Some supporters of this philosophy report losing weight on a diet without meat. Others consider that this diet can improve health and prevent diseases such as diabetes and hypertension. Furthermore, in comparison to omnivores, vegetarians are usually more concerned about health issues (111,112). No scientific evidence, however, exists to classify vegetarianism as a healthier or unhealthier feeding system (112); the only scientific evidence concerns vitamin B12, zinc, and iron deficiencies (110). Philosophies of life are connected to sociocultural issues and identity groups (111). In a multicultural society, all (food) lifestyles have to be accepted and have space in society. In addition, the ideological movements related to food, besides being an arena of ethical, ecological, and public health discussion, can play an essential role in the economy; this is a new market to be served, generating new business and improving the economy. The time needed to cook and the difficulty of finding convenient vegetarian food or vegetarian restaurants are the main barriers described by vegetarians (112). As new business opportunities open, the food market tries to adapt to new demands, both in terms of operating procedures and in the development of new products. FSTE professionals are looking to develop products similar to meat with no animal sourcing. In addition, technologies such as nutritional enrichment by nano- or microencapsulation have been studied and applied in new products to mitigate possible nutritional losses (115). FSTE professionals understand that healthy and sustainable food intake is a universal right, regardless of religion or philosophy of life.

Concluding remarks

Food Science, Technology, and Engineering aim to supply quality food to every single person worldwide. To the professionals in these domains, quality is synonymous with safety, nourishment, and taste; however, in addition to technical and food safety knowledge, understanding social anthropology is crucial to developing and supplying quality food. Eating is a complex and multifactorial issue, and a multidisciplinary effort is required to succeed in reaching this goal. Recently, new issues about healthiness have emerged in society. Food-Based Dietary Guidelines have been made worldwide to improve health and quality of life through food intake and food choices. Nonetheless, the professionals responsible for developing food were not included in this debate, so these are not yet complete or accurate guidelines.
To be sure, an egregious conceptual mistake about processing terminologies has been made in the development and use of the misleading NOVA food classification, and this is provoking misinformation and misunderstanding. Practicality is a necessity imposed nowadays. In a dynamic multicultural society, it is impossible to live without the presence of industry and the sound scientific technologies that sustain it. Unfortunately, love of the act of cooking is not enough to destroy microorganisms and toxins; unit operations are required. There is no way to move back in society's evolution and change this reality. FSTE professionals and the food industry are now challenged to reinvent themselves by considering social drivers. Such an achievement requires that all food industry professionals and public policy developers focus more on the anthropological perspective. Besides its physiological role, food is also an arena of feelings, insecurities, beliefs, and political actions. To improve health, understanding and treating the consumer as a human being is also essential. To be sure, FSTE has substantial scientific knowledge to help industries guarantee a high standard of quality for processed foods.

Author contributions

AA: bibliographic investigations, writing-original draft, and writing-review and editing. JL: writing-original draft and writing-review and editing. PS: writing-review and editing and supervising. All authors contributed to the article and approved the submitted version.
Research on Investment and Daily Production Limit of Large-scale Fracturing Productivity Project of New Vertical Well

Large-scale fracturing technology is a new type of fracturing technology developed in recent years alongside research on the fracturing theory of naturally fractured reservoirs. The application of large-scale fracturing in vertical wells has achieved good production results. However, because the new technology has only been applied in oilfields for a short time and the regularity of its development indexes is still weak, no established economic evaluation method has yet taken shape. This paper evaluates the development effect of large-scale fracturing in the Daqing Oilfield, clarifies the variation law of its development indexes, and establishes an index prediction method for the evaluation period. After reviewing the various depreciation calculation methods used in economic evaluation, an appropriate modified production method is selected for calculating depreciation. Based on the relevant data and results of the economic evaluation of oilfield development, an economic evaluation method for large-scale fracturing productivity projects has been formed, and a calculation program has been compiled. The program is used to calculate the investment boundaries of large-scale fracturing blocks for different single-well daily production levels under multiple oil prices, to draw charts of single-well investment against single-well daily production boundaries under different oil prices, and to analyze the sensitivity of the investment boundaries. The research results will be applied in proven reserves submission, unutilized reserves evaluation, and oilfield development planning. Using this method for reference, boundary charts for conventional water flooding development and large-scale volume fracturing can also be drawn, so the research results can be applied more widely.

Application of Large-scale Fracturing Technology in Daqing Oilfield

Large-scale fracturing technology was applied in 85 wells in the Daqing peripheral oilfields from 2013 to 2015, and good development results were achieved. The investment required for large-scale fracturing is about 5 times that of ordinary fracturing, and the change of the development indexes after fracturing also differs from that of ordinary fracturing. It is therefore necessary to analyze the development effect of large-scale fracturing, clarify the change law of the development indexes, and establish an index prediction method for the evaluation period, in order to provide methodological support for the economic evaluation of subsequently implemented blocks. From the mid-year oil production figures in Table 1, the production multiples in 2013, 2014, and 2015 after large-scale fracturing are 3.64, 3.75, and 3.45, respectively, i.e., on average more than 3.5 times the pre-fracturing level. For the wells implemented in 2013, the production rate 4 years after fracturing is 1.9 times that before implementation; by this rule, the effective period of large-scale fracturing can last for more than 8 years. Aligning the implementation times of the two batches of fracturing wells implemented in 2013 and 2014 brings the number of sample wells to 64, which better reflects the decline law. After the implementation of large-scale fracturing, the decline rate increases by about 8 percentage points, and the initial decline rate exceeds 20%.
The daily production of a single well stabilizes at 2.27 tons by day 300 of the first year after implementation, the water cut increases by 10%, and the liquid production rate is 4.75 times that of the year before implementation. The development effect is good, and the technology has a promising application prospect. Based on the development characteristics of the Daqing peripheral oilfields, the decline law of large-scale fracture-network fracturing is hyperbolic decline with an initial decline rate of 35%; the first year is taken at 33% of the production capacity, the second year at the designed production capacity, and the third year converted to 23%. This rule is used as the prediction method for oil production in the evaluation period.

Introduction of the Economic Evaluation Method for Block Productivity

The relationship between economic evaluation methods and parameters should be clarified, and the methods and the parameters should be issued separately. Economic evaluation methods mainly standardize the contents and procedures of the economic evaluation specialty in the early stage of investment decision-making and clarify the concepts of the relevant parameters; they have a certain stability. The specific values of the economic evaluation parameters, by contrast, are strongly time-sensitive and need to be adjusted according to internal and external conditions and the company's development strategy. They are stipulated in the "Economic Evaluation Parameters of Investment Projects", which implements dynamic management and is used in combination with the methods.

Total Investment of Oilfield Productivity Projects

In the initial stage of project construction, the total investment of the project is mainly the total investment of the oil and gas development and construction project, which is used for drilling development and oilfield capital construction.

Oil and Gas Operation Cost

The oil and gas operation cost can be divided into four parts: production, injection, treatment, and management; this division allows cost analysis to be carried out for each part. The production operation mainly includes production operation expenses. The injection operation mainly includes oil-displacement injection expenses, downhole operation expenses, well logging and testing expenses, maintenance and repair expenses, and other auxiliary operation expenses. The treatment operation mainly includes oil and gas processing expenses and transportation expenses, and the management process mainly includes the management fees of factories and mines.

Financial Profitability Analysis of Oilfield Productivity Projects

Whether the financial profitability is acceptable can be judged by calculating a set of economic indicators. Profitability analysis is an important part of economic evaluation, and the project is judged according to its conclusions. The following three indicators are the most important ones to be analyzed.

Establishment of a Typical Large-scale Fracturing Block Model

To meet the needs of the evaluation, a typical large-scale fracturing block model is established. The reservoir block contains 20 vertical wells, and no water injection wells are used. The drilling depth is 2000 m, and the productivity of the block is calculated on the basis of 300 producing days per year.
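The decline-law prediction above can be made concrete with a standard Arps hyperbolic decline curve. The paper states hyperbolic decline with a 35% initial decline rate but does not report the hyperbolic exponent, so the value of b below, and the use of the stabilized 2.27 t/day rate as the starting point, are illustrative assumptions rather than the paper's calibrated parameters:

```python
import numpy as np

def arps_hyperbolic(qi: float, di: float, b: float, t: np.ndarray) -> np.ndarray:
    """Arps hyperbolic decline: q(t) = qi / (1 + b*di*t)**(1/b)."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

# Illustrative inputs: qi is the stabilized single-well rate reported above
# (2.27 t/day); di = 0.35/yr is the stated initial decline rate; the
# hyperbolic exponent b = 0.5 is an assumption (not given in the paper).
qi, di, b = 2.27, 0.35, 0.5
years = np.arange(15)                  # 15-year evaluation period
q = arps_hyperbolic(qi, di, b, years)  # daily rate in each year, t/day
annual_oil = q * 300                   # tonnes/year at 300 producing days
print(np.round(annual_oil, 1))
```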
For the economic evaluation of blocks implementing large-scale fracturing, the internal rate of return is set at 6%, the operation cost takes its actual value, and the evaluation period is 15 years.

Drawing of the Boundary Charts

Using the integrated productivity evaluation program, the investment boundaries for different oil prices and single-well daily production levels are calculated, and charts of single-well investment against single-well daily production for blocks developed with large-scale fracturing technology are plotted for different oil prices. Taking $60 as an example, the investment boundary under each single-well daily production level is calculated, and the chart is plotted from the calculated results and parameters; the specific data are shown in Table 2 below. When the international oil price is US$50, US$55, US$60, US$65, US$70, US$80, US$90, and US$100, respectively, the investment limits for different single-well daily production levels under the condition of a 6% internal rate of return are calculated on the basis of the development indexes and cost parameters of vertical-well large-scale fracturing technology, and these parameters are used to draw the boundaries of single-well investment and single-well daily production under the different oil prices. At present, according to the current investment parameters of the oilfield, the total investment in drilling, infrastructure, and fracturing for large-scale fracturing is estimated at about 6.5 million yuan. The daily production limits of a single well under different oil prices, read from the charts, are shown in Table 3; at $70, the daily production limit is 2.8 tons. Based on the evaluation results for the economic boundaries of large-scale fracturing blocks, when the daily output of a single well increases by 0.1 tons at the same oil price, the change in the boundary investment is as shown in Table 4. The results show that the higher the oil price, the larger the increment of the investment limit for each 0.1-ton increase in single-well daily production. In other words, the higher the oil price, the more sensitive the evaluation results are to single-well daily production; in a period of high oil prices, increasing the daily production of a single well plays a greater role in improving the economic benefits of the block.

Conclusion

The boundary charts can be applied to the submission of proven reserves, the evaluation of unutilized reserves, and the formulation of oilfield development plans. They allow a preliminary screening of whether blocks meet the economic benefit requirements and reduce the evaluation workload. The same method can be used to develop boundary charts for conventional water flooding development and large-scale volume fracturing, enriching the scope of application of the charts. In the process of project evaluation, the cost, tax, and fee parameters should be selected according to the actual situation of the unit; if the parameters are incomplete, the evaluation data of other similar blocks can be borrowed.
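To illustrate the boundary calculation described in the "Drawing of the Boundary Charts" section above: at the boundary, the single-well investment equals the present value, at the 6% benchmark rate, of the net operating cash flows over the 15-year evaluation period. The sketch below shows this logic only; the operating cost, exchange rate, tonne-to-barrel factor, and tax share are placeholder values, not the confidential oilfield parameters behind Tables 2-4:

```python
import numpy as np

def investment_boundary(daily_t: float, oil_usd: float, opex_per_t: float = 1200.0,
                        fx: float = 7.0, bbl_per_t: float = 7.3,
                        tax_share: float = 0.25, irr: float = 0.06,
                        years: int = 15, days: int = 300,
                        di: float = 0.35, b: float = 0.5) -> float:
    """Largest single-well investment (yuan) still achieving the benchmark IRR.

    daily_t: initial single-well daily production (tonnes/day).
    All cost/price/tax parameters are illustrative placeholders.
    """
    t = np.arange(years)
    q = daily_t / (1.0 + b * di * t) ** (1.0 / b)  # Arps hyperbolic decline
    # Net yearly cash flow: after-tax revenue minus operating cost.
    net = q * days * (bbl_per_t * oil_usd * fx * (1.0 - tax_share) - opex_per_t)
    # Discount years 1..15 back at the benchmark rate of return.
    return float(np.sum(net / (1.0 + irr) ** (t + 1)))

# Sweep single-well daily production at a $60 oil price, analogous to Table 2.
for dp in (2.0, 2.5, 3.0, 3.5):
    print(f"{dp} t/day -> boundary {investment_boundary(dp, 60.0) / 1e4:.0f} x 10^4 yuan")
```

Under these placeholder parameters the boundary comes out in the same order of magnitude as the roughly 6.5 million yuan total investment quoted above, but the numbers should not be read as reproducing the paper's tables.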
Cardiolipin is an Optimal Phospholipid for the Assembly, Stability, and Proper Functionality of the Dimeric Form of NhaA Na+/H+ Antiporter

Cardiolipin (CL) was shown to bind to the dimer interface of the NhaA Na+/H+ antiporter. Here, we explore the cardiolipin-NhaA interaction both in vitro and in vivo, using a novel and straightforward in-vitro assay in which n-dodecyl β-D-maltoside (DDM) detergent is used to delipidate the dimer interface and to split the dimers into monomers; the monomers are subsequently exposed to cardiolipin or to the other E. coli phospholipids. The most efficient reconstitution of dimers is observed with cardiolipin. This assay is likely to be applicable to future studies of protein-lipid interactions. In-vivo experiments further reveal that cardiolipin is necessary for NhaA survival, although the less efficient phosphatidyl-glycerol (PG) can also reconstitute NhaA monomers into dimers. We also identify a putative cardiolipin binding site. Our observations may contribute to drug design, as human NhaA homologues, which are involved in severe pathologies, might also require specific phospholipids.

Figure 1. The NhaA dimer interface. (a) The crystal structure of the NhaA monomer and the interfacial domain between the two monomers of the NhaA dimer, represented according to 17 (PDB ID: 4ATV); one monomer is shown as a colored ribbon, and the transmembrane segments (TMs) are numbered in white Roman numerals. The relevant residues are in stick representation. Part of the other monomer of the dimer is shown as a grey line; its TMs are numbered in black, and the relevant residues are shown in line representation. In the NhaA dimer interface, sites where single-Cys replacements on TM IX and the β-hairpin loop cross-link 22,26 are marked in dark pink. The membrane is depicted in wheat color. The cytoplasmic funnel (dots) is composed of TMs II, IVc (c and p denote the cytoplasmic and periplasmic sides, respectively), V, and IX. The periplasmic funnel (dots) comprises TMs II, VIII, and XIp. The TM IV/XI assembly of short helices connected by extended chains is shown together with the putative Na+ binding site (D163, D164). Cyt, cytoplasm; Per, periplasm. (b) The direct contacts between NhaA monomers 17: W258 and V254 on TM IX of one monomer make a cross-interface bridge to TM VII of the other monomer at R204. The two representations were generated using PyMOL with equal notations.

Figure 2. Increasing the DDM concentration above 0.03% progressively splits native NhaA dimers into monomers. High-pressure membrane vesicles were isolated from TA16/pAXH3 cells expressing His-tagged NhaA, and the protein was affinity-purified and pre-incubated with increasing concentrations of DDM. Then, native gel sample buffer was added, and the proteins were resolved on a native gel. D, gel mobility of dimers. Notably, two bands were occasionally observed in the native gel, most likely due to different protein-Coomassie Blue complexes. M, gel mobility of monomers. The experiment was conducted three times with identical results.

Results

Native NhaA dimers split into monomers in the presence of increasing DDM concentrations. To study the cardiolipin-NhaA interaction, we first looked for conditions that delipidate NhaA. Prior studies have shown that DDM, like other detergents, can delipidate membrane proteins 21.
Moreover, intermolecular cross-linking studies have shown that DDM may affect the integrity of the NhaA interface in particular 22, suggesting that interfacial lipids may be important for NhaA dimerization, such that exposure to DDM may delipidate the interface and split NhaA dimers into monomers. Accordingly, we prepared affinity-purified His-tagged NhaA in 0.015% DDM and pre-incubated it in the presence of increasing concentrations of DDM (Fig. 2). Then, the proteins were again affinity-purified, resolved on native gels, and stained, and the band densities were determined. After pre-incubation in up to 0.1% DDM, only dimers were observed (Fig. 2); however, after pre-incubation at 1% DDM and above, the quantity of dimers progressively decreased, whereas the quantity of monomers increased. Pre-incubation at 3% DDM yielded 85% monomers and 15% dimers, whereas pre-incubation at 5% DDM yielded 95% monomers (Fig. 3, top panel, first lane). These observations strongly suggest that, as expected, DDM delipidates the NhaA dimers, and that this delipidation extracts lipids that are needed for maintaining the dimers.

Cardiolipin efficiently reconstitutes NhaA monomers into dimers. E. coli membranes are composed of ∼75% phosphatidyl-ethanolamine (PE), ∼20% phosphatidyl-glycerol (PG), and ∼5% cardiolipin (CL), and only the latter has been shown to bind in the dimer interface of NhaA 18. We examined whether in-vitro addition of each of these three phospholipids to samples of affinity-purified NhaA monomers, prepared by pre-incubation in 5% DDM, would reconstitute the NhaA dimers. One sample served as a control with no phospholipid addition and contained 95% monomers (Fig. 3, top panel, 0 CL). In parallel, to each remaining sample we added CL, PG, or PE in increasing concentrations, indicated as molar ratios (lipid/NhaA). We incubated each sample for 45 min at 4 °C with slow agitation and subsequently resolved the proteins on native gels. Addition of CL to NhaA monomers efficiently reconstituted NhaA dimers (Fig. 3): whereas about 95% monomers were observed in the control sample with no CL, increasing the CL/NhaA molar ratio from 2.5 to 15 progressively reconstituted the monomers, ultimately yielding 100% dimers. PG was apparently less efficient: a molar ratio of 15 PG/NhaA yielded 85% dimers. However, at a molar ratio twice that of CL (20 PG/NhaA), its effect on NhaA dimerization was similar to that of CL. PE was much less effective; only 30% dimers were observed at a PE/NhaA molar ratio of 15, and a molar ratio of 20 yielded 50% dimers. Taken together, these in-vitro results clearly show that cardiolipin acts as a key component of the dimeric assembly of NhaA monomers.

Figure 3. Cardiolipin efficiently reconstitutes NhaA dimers from monomers. Samples of monomeric NhaA suspension (12 µL, 1.7 µg protein, 7.6 µM NhaA) were prepared. One sample with no addition served as a control, and to the other samples different phospholipids were added at the indicated phospholipid/NhaA molar ratios: cardiolipin (CL), L-α-phosphatidylglycerol (PG), or L-α-phosphatidylethanolamine (PE). The phospholipid-containing samples were sonicated for 10 s in a bath sonicator at 23 °C and incubated for 45 min at 4 °C with slow agitation. Then, 15 µL of native gel sampling buffer was added, and the proteins were resolved on native gels. The experiment was conducted three times with identical results.
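As a practical note on the reconstitution protocol above, the lipid amounts implied by the molar ratios can be worked out from the stated sample composition (12 µL at 7.6 µM NhaA). The molecular weights used below are assumptions (roughly 1470 g/mol for cardiolipin, 750 g/mol for PG, 720 g/mol for PE), not values taken from the paper:

```python
def lipid_mass_ng(molar_ratio: float, protein_uM: float = 7.6,
                  vol_uL: float = 12.0, lipid_mw: float = 1470.0) -> float:
    """Mass of lipid (ng) to add for a given lipid/NhaA molar ratio.

    lipid_mw of 1470 g/mol is an assumed value for cardiolipin; use roughly
    750 g/mol for PG or 720 g/mol for PE instead.
    """
    protein_mol = protein_uM * 1e-6 * vol_uL * 1e-6    # mol NhaA in the sample
    return molar_ratio * protein_mol * lipid_mw * 1e9  # grams -> nanograms

# The sample holds ~91 pmol NhaA, so a CL/NhaA ratio of 15 needs ~2 ug of CL.
for r in (2.5, 5, 10, 15):
    print(f"CL/NhaA = {r:>4}: {lipid_mass_ng(r):.0f} ng CL per 12 uL sample")
```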
They further show that PG can also reconstitute NhaA dimers from monomers in vitro, and that its efficiency equals that of CL when the PG/NhaA molar ratio is twice that of CL/NhaA.

A mutant that does not synthesize cardiolipin produces a mixture of monomers and a few unstable NhaA dimers. To investigate the NhaA-cardiolipin interaction in vivo in intact cells, we used an E. coli mutant, BKT12, that does not synthesize cardiolipin 23 and transformed it with the plasmid combination required for isopropyl β-D-1-thiogalactopyranoside (IPTG)-induced expression of NhaA (pAXH3, pIQ). As a wild-type (WT) control we used the E. coli derivative TA16 transformed with the plasmid pAXH3 (TA16 carries IQ on its chromosome) (see Materials and Methods). After the cells were grown and induced in minimal medium, we isolated high-pressure membrane vesicles, affinity-purified the proteins, incubated them in the presence of various DDM concentrations for 10 min, and resolved them on native gels (Fig. 4). The WT (TA16) cells primarily produced stable NhaA dimers: pre-incubation of the proteins in up to 0.2% DDM yielded 90% dimers and 10% monomers (Fig. 4). In marked contrast, mutant BKT12 produced a mixture of about 80% dimers and 20% monomers already following pre-incubation with 0.015% DDM. The latter dimers were less stable than those produced in the WT cells: increasing the DDM concentration above 0.015% progressively split them into monomers. For example, for BKT12-produced NhaA, pre-incubation in 0.2% DDM, which hardly affected NhaA dimers expressed from TA16, yielded a mixture comprising 85% monomers (Fig. 4). Hence, in the absence of CL, some unstable NhaA dimers are produced together with NhaA monomers.

The mutant BKT12, which does not synthesize cardiolipin and produces a mixture of NhaA dimers and monomers, is salt-sensitive, although its everted membrane vesicles exhibit antiporter activity similar to that of the wild type. The growth phenotype of BKT12/pIQ/pAXH3 in the presence of high-Na+/Li+ selective media was tested in comparison to a positive control (TA16/pAXH3) expressing WT NhaA and a negative control (EP432/pBR322) bearing the empty vector (Fig. 5a). Under non-selective conditions (LBK), the three strains showed similar growth phenotypes. In the presence of 0.6 M NaCl at pH 7, the positive control and BKT12/pIQ/pAXH3 grew to the same extent. In the presence of 0.6 M NaCl at pH 8.2, however, whereas the positive WT control grew, the mutant could not confer salt resistance, and very few colonies were observed. As expected, the negative control did not grow in any of the selective media. The Na+/H+ antiporter activity in everted membrane vesicles of WT NhaA (pAXH3) expressed in a WT E. coli strain (EP432) was compared to its activity when expressed in the strain BKT12, which cannot synthesize cardiolipin (Fig. 6a). Interestingly, although the growth phenotypes differed (Fig. 5a), the antiporter activity and its pH dependence were very similar when NhaA was expressed in either E. coli strain. This apparent discrepancy stems from the following facts: for both tests (growth phenotype and antiporter activity in isolated membrane vesicles), the BKT12 and WT cells are grown on LBK. Yet the WT cells possess the native NhaA dimer, whereas BKT12 bears a mixture of monomers and aberrant dimers (Fig. 4). As previously shown 16, NhaA monomers are functional, which is why they show WT antiporter activity (Fig. 6).
However, under selective conditions, the native NhaA dimers are much better than the monomers in rendering growth salt-resistant, as shown in Fig. 5.

Figure 4. Plasmid-borne nhaA expresses a mixture of NhaA monomers and unstable dimers in BKT12, a mutant strain that does not synthesize cardiolipin. TA16/pAXH3 cells expressing WT NhaA and a mutant that does not synthesize cardiolipin (BKT12/pIQ, pAXH3) bearing pAXH3 and pIQ (a plasmid combination needed for IPTG-controlled expression of NhaA) were grown on minimal medium to 0.8 OD600 and IPTG-induced for 2 h. Then, high-pressure membranes were isolated, NiNTA affinity-purified proteins were isolated, and samples (6 µg protein/7.5 µL) were mixed with equal volumes of native gel sampling buffer, titrated to a final pH of 7, incubated at the indicated DDM concentrations for 10 min at 23 °C, and resolved on a native gel. The experiment was conducted three times with identical results.

The putative binding site of cardiolipin. In the NhaA dimer crystal structure 17, an electron density was identified involving the two arginines of TM VII (at sites 203 and 204); these two arginines constitute a contact point between the two monomers of the NhaA dimer, and the electron density has been suggested to reflect the presence of lipids at that contact point (Fig. 1). To investigate whether the two arginines are indeed a lipid binding site, we mutated each arginine to alanine, separately and both together, and explored the mutants' growth phenotypes on selective Na+/Li+ media (Fig. 5b). The growth phenotypes of the arginine mutants (R203A, R204A, and R203A-R204A) were very similar to the WT growth phenotype on Na+/Li+-selective media. Moreover, in everted membrane vesicles, the Na+/H+ antiport activity and even the apparent Km for Na+ were very similar to those of the WT at pH 8.5; the only difference observed in the mutants was an alkaline shift of about 0.5-1 unit in the pH dependence of the antiport activity (Fig. 6a). These observations suggest that, in vivo and in the membrane, the mutants are quite stable and functional. However, in vitro, the mutant R203A-R204A aggregated during affinity purification, implying that the double-mutant protein is unstable. These results suggest that R203 and R204 comprise the putative cardiolipin binding site. To further support this suggestion, we performed a molecular docking study (Supplementary Information and Fig. S1). The anionic character of CL (due to the presence of two phosphate groups) favored the two cationic arginine moieties, i.e., Arg203 and Arg204. At the N-terminus of TM IX, in close proximity to R203 and R204, reside two additional arginines, R245 and R250, that can also bind cardiolipin (Fig. 1a) 24; we mutated these arginines, too, to investigate whether they might contribute to the cardiolipin binding site. The three mutants we produced (R245A, R250A, and R245A-R250A) were similar to the WT in terms of the growth they elicited in selective media and their antiport activity in everted membrane vesicles (Fig. 6b). Furthermore, circular dichroism (CD) analysis showed that the R245A-R250A protein was as stable as the WT (Fig. 7). These results imply that R245 and R250 are not involved in the cardiolipin binding site.

Discussion

Recent mass spectrometry experiments have shown that the phospholipid cardiolipin is bound in the dimer interface between the monomers of NhaA 18.
Here, we explored the cardiolipin-NhaA interaction in vitro and in vivo and showed that cardiolipin is essential for NhaA dimerization and dimer stability, as well as for NhaA functionality.

Cardiolipin efficiently reconstitutes dimers from DDM-delipidated NhaA monomers in vitro, whereas PG and PE are less efficient. We first showed that, whereas affinity-purified NhaA proteins in DDM (0.015%-0.03%) micelles persist as dimers (Fig. 2), increasing the DDM concentration to 1% or above progressively splits the NhaA dimers into monomers. As detergents are known to delipidate membrane proteins 21, these results strongly suggest that a high concentration of DDM strips away phospholipids that are essential for maintaining NhaA dimers. We then tested the capacity of each of the three phospholipids present in the E. coli membrane (PE, PG, and CL, present at concentrations of ~75%, ~20%, and ~5%, respectively) to reconstitute dimers from NhaA monomers produced by DDM exposure. Strikingly, though CL is the least abundant phospholipid in the membrane, in-vitro addition of CL to delipidated NhaA monomers (CL/NhaA molar ratio of 15) reconstituted 100% of the monomers into dimers. PG, a major phospholipid, was essentially as efficient as CL if we consider that CL is a dimer of phosphatidic acids connected by a glycerol: at a molar ratio of 20 phospholipid/NhaA, PG reconstituted 90% of the monomers into dimers, as did CL at a molar ratio of 10/NhaA. Nevertheless, the atomic structures of two bound PG molecules versus one bound CL can be different, and the former may be less stable. Indeed, this was hinted at by the results summarized in Fig. 4: the dimers produced, possibly by PG, in a strain (BKT12) that does not synthesize CL are less stable than those produced in the WT. PE, the other major phospholipid, was much less efficient in the in-vitro reconstitution test (Fig. 3). We conclude that CL is essential for the dimerization and dimeric structure of NhaA, though PG can fulfill this function when its concentration is twice that of CL. Notably, these results also point to a novel means of exploring interactions between oligomeric membrane proteins and lipids proposed to be present at their interfaces: simply testing which phospholipids reconstitute oligomers of delipidated, affinity-purified membrane proteins in vitro.

Cardiolipin is required for the stability of NhaA dimers. As noted in the introduction, the crystal structure of the NhaA dimer 17 revealed the structural elements that connect the monomers. These include the β-hairpin that forms the β-sheet at the periplasmic side of the NhaA dimer and the interactions between TM VII of one monomer and TM IX of the other monomer at the cytoplasmic side (Fig. 1). Subsequent studies confirmed that each of these elements is crucial for the NhaA dimer structure: mutations in which one or both elements were deleted, Δ(β-hairpin) 16, Δ(VI-VII), and Δ(β-hairpin, VI-VII) 16,25, yielded monomeric NhaA. Surprisingly, the three mutations produced different growth phenotypes on routinely used salt-selective media (0.6 M NaCl at pH 7 or pH 8.3 and 0.1 M LiCl at pH 7): the Δ(β-hairpin) mutant grew similarly to the WT 16, and its Na+/H+ antiport activity, measured in inside-out membrane vesicles, was very similar to that of the WT in terms of rate, pH profile, and apparent Km for Na+.
These observations imply that the functional unit of NhaA is the monomer; indeed, the benefit of the dimer over the Δ(β-hairpin) monomer was revealed only under extreme stress conditions, when the WT dimers conferred salt resistance whereas the monomers did not 16,26. In contrast, the Δ(VI-VII) and Δ(β-hairpin, VI-VII) mutants could not grow at pH 8.3 in the presence of 0.6 M NaCl. Taken together, these observations suggest that both the β-hairpin and another factor involving TMs VI and VII are needed for NhaA dimeric structure and stability, and that the latter factor may also be necessary for functionality. According to the results presented herein (Fig. 3), this factor is likely to be cardiolipin. Instead of producing NhaA monomers through mutagenesis of the protein, we used BKT12/pAXH3/pIQ, a host cell that does not synthesize cardiolipin 23. This strain produced a mixture of NhaA monomers and dimers (Fig. 4). The appearance of the dimers in BKT12/pAXH3/pIQ can be accounted for by the presence of PG in the cardiolipin-less mutant; as shown herein, PG can reconstitute NhaA dimers when cardiolipin is absent (Fig. 3). It would be interesting to investigate a mutant lacking both PG and CL. Notably, the dimers produced in the absence of cardiolipin were less stable than WT dimers (Fig. 4): whereas affinity-purified dimers of the WT protein remained stable in DDM concentrations of up to 0.1%, the dimers produced in BKT12 were sensitive already to 0.015% DDM, and the proportion of monomers progressively increased as we increased the detergent concentration. At 0.2% DDM, 85% monomers were observed (whereas the proportion of monomers for WT NhaA at this DDM concentration was only 10%) (Fig. 4). We conclude that CL is an optimal phospholipid for the stability of NhaA dimers. Surprisingly, whether expressed in the EP432 or the BKT12 E. coli strain, the antiporter activity of NhaA in everted membrane vesicles was very similar (Fig. 6a). For preparing everted membrane vesicles, the cells were grown in non-selective medium (LBK), and under this condition the activity of NhaA is dispensable, so both BKT12 and EP432 expressing NhaA (with or without CL) grew alike. On the other hand, under stress conditions of growth, the instability of NhaA deprived of CL compromised growth. Notably, the growth phenotype of BKT12/pAXH3/pIQ was very similar to the growth pattern of the monomeric mutants Δ(VI-VII) and Δ(β-hairpin, VI-VII) (compare Fig. 5a to Fig. 2 in 25). Specifically, in the presence of 0.6 M NaCl, BKT12/pAXH3/pIQ, like Δ(VI-VII) and Δ(β-hairpin, VI-VII), hardly grew at pH 7 and did not grow at pH 8.2 (Fig. 5a). The common denominator of these three salt-sensitive strains is a lack of CL: CL is not produced in BKT12/pAXH3/pIQ, and the CL binding site is likely to be abrogated in the mutants with TMs VI-VII deleted (see below). Very few studies of the interaction between transporter activity and lipids have been carried out 27,28; a recent extensive study of the lipid-melibiose permease (MelB) interaction, using a BKT strain, showed that CL is not needed for MelB folding, stability, and activity 27. Unlike the dimeric NhaA, MelB is a monomer. Therefore, these results may support the idea that CL is involved in dimer formation.

The putative binding site of cardiolipin. Given that cardiolipin is present in the NhaA dimer interface, the NhaA dimer crystal structure may provide clues regarding the location of the cardiolipin binding site 17 (Fig. 1a,b).
Specifically, the dimer structure contains a large pear-shaped space that separates the two monomers; this space has its apex at L255 of TM IX and is suggested to be full of lipids 17. L255 and the residues above it toward the cytoplasm are in close proximity to their twins on the other monomer, and single-Cys replacements of these residues have been shown to form intermolecular cross-links across the dimer interface 22 (Fig. 1a). Interestingly, the twin L255C is likely to be involved in a rigid intermolecular interaction, as it cross-linked only with the short cross-linker 1,2-ethanediyl bismethanethiosulfonate (MTS-2-MTS) (spanning 5 Å) or formed an S-S bond 22. Toward the β-hairpin loop (Fig. 1a,b) at the periplasm, W258 on TM IX of one monomer makes a bridge to TM VII of the other monomer, and V254 on TM IX of one monomer interacts with R204 of TM VII of the other monomer. Thus, the dimer interface above and below the pear-shaped interspace is bordered (or sealed) by interactions between TMs (VII and IX of each monomer). Where, then, does CL interact in the dimer interface? CL, with its two phosphates, is potentially anionic 29; it was recently demonstrated that the CL head-group is fully ionized as a dianion and that CL behaves as a strong dibasic acid within the physiological pH range 30. Therefore, it is likely that positively charged residues contribute to its binding site. As noted above, in the NhaA dimer crystal structure 17, an electron density was identified in the vicinity of the four arginines at the TM VII-TM VII contact point of the two monomers (two arginines in each monomer, R203 and R204; Fig. 1) 17, and this density was later suggested to indicate the presence of CL 31. In in-vivo experiments, the alanine replacements of these arginines (R203A, R204A, and R203A-R204A) grew on selective media similarly to the WT (Fig. 5b), and in membrane vesicles the only difference observed with respect to the WT was an alkaline shift in Na+/H+ antiport activity (Fig. 6b). In contrast, in vitro, the protein R203A-R204A aggregated during affinity purification. In other words, whereas the mutant lacking the arginines R203 and R204 is functional in vivo, the protein is unstable in vitro. This difference can be accounted for by the fact that PG, which can reconstitute NhaA dimers (albeit less efficiently than cardiolipin, see above), was present in our in-vivo experiments but not in our in-vitro experiments. The involvement of Arg203 and Arg204 in the putative CL binding site was supported by a molecular docking study (Supplementary Information and Fig. S1). Taken together, these results imply that the TM VII arginines contribute to the CL binding site, which is needed for the dimerization and stability of NhaA. We also investigated whether residues R245 and R250, which are in close proximity to the putative cardiolipin binding site (Fig. 1a) 24,31, might also contribute to cardiolipin binding. We determined that they do not: mutants R245A, R250A, and R245A-R250A grew on selective media and exhibited antiport activity similar to that of the WT (Fig. 6c). Furthermore, the affinity-purified protein R245A-R250A was stable and showed CD spectra similar to those of the WT (Fig. 7).

The role of cardiolipin in NhaA functionality. Our in-vivo experiments, in which CL-less NhaA did not confer salt resistance, indicated that CL is important for NhaA functionality.
Yet, our previous studies of the monomeric mutant NhaA-Δβ revealed that the NhaA monomer is the functional unit of NhaA 16. What, then, is the functional role of cardiolipin in NhaA? First, it is important to note that several studies have indicated that functional interactions take place between NhaA monomers 14,32. Particularly compelling is the observation that mutations of residues in or near the regions proposed herein to constitute the CL binding site (Fig. 1) affect NhaA Na+/H+ antiport activity 14. Specifically, the following effects were observed: (a) intermolecular cross-linking of a single NhaA-V254C mutant by a rigid cross-linker dramatically changed the pH profile of cysteine-less NhaA/V254C, whereas a long and flexible cross-linker had no effect 14; (b) chemical modification of L255C increased the apparent Km for Na+ 6-fold and shifted the pH profile by 1 unit to the alkaline side 22; (c) Cys replacements of the potentially positively charged residues in this segment of NhaA (K242C, R245C, K249C, R250C, and H256C) caused a 4-10-fold increase in the apparent Km for Na+ 22. Interestingly, K249 is the only trypsin-cleavable site of NhaA 33, and the pH profile of the trypsin cleavage reflects the pH dependence of the antiport activity 34. Recent experiments that we have carried out may shed additional light on the functional role of CL in NhaA. We adapted hydrogen/deuterium-exchange mass spectrometry (HDX-MS) to identify global conformational changes in NhaA upon Li+ binding at physiological pH 35. Our analysis revealed a pronounced Li+-induced change in deuterium uptake in TM IX at the CL binding site. It is possible that CL is required for this conformational change. Hence, the observation that the NhaA monomer is a functional unit does not preclude the possibility that CL is needed for functionality; moreover, the CL interaction in the dimer may contribute to fine-tuning of the antiport activity. This proposition is consistent with the observation that genes for CL synthesis are upregulated in E. coli under salt stress 36, implying that CL might have a regulatory role. Thus, the presence of CL in the dimeric interface between the NhaA monomers is not related to the structural fold, nor is it necessary for Na+/H+ antiport activity; rather, its primary role seems to be conferring stability on the dimer and regulating conformational changes. Indeed, as noted above, CL has also been identified in the dimer interface of LeuT 18,20, a transporter whose structural fold is different from that of NhaA 8 yet that shows similarly low dimeric stability. In contrast, NapA, the Na+/H+ antiporter of Thermus thermophilus, which shares the NhaA fold, does not specifically bind phospholipids and shows high dimeric stability 20. The stability of NapA stems from its structure: an additional N-terminal helix, absent from NhaA, strengthens the monomer-monomer interface. Taken together, these observations suggest that different membrane proteins use different 'tools' for preserving their oligomeric states and regulating conformational changes: some rely on structural elements that ensure tight contact between subunits, whereas others recruit lipids for this purpose. These insights raise the intriguing question of whether the human NhaA homologues that are involved in disease also require specific phospholipids. If they do, their phospholipid binding sites may be effective drug targets.
Materials and Methods

Cells were grown either in Luria broth (LB) or in modified LB (LBK), in which NaCl was replaced with KCl. The medium was buffered with 60 mM 1,3-bis[tris(hydroxymethyl)methylamino]propane (BTP). For plates, 1.5% agar was used. To test cell resistance to Li+ and Na+, EP432 or TA16 cells transformed with the respective plasmids were grown on LBK to an A600 of 0.5. Samples (2 µL) of serial 10-fold dilutions of the cultures were spotted onto agar plates containing the selective media (modified LB in which NaCl was replaced with the indicated concentrations of NaCl or LiCl at the various pH levels) and incubated for 2 days at 37 °C.

Site-directed mutagenesis. Site-directed mutagenesis was carried out according to a polymerase chain reaction-based protocol 38 with pAXH3 as the template. All plasmids carrying mutations are designated by the name of the plasmid followed by the mutation.

In-vitro reconstitution of NhaA dimers from monomers by cardiolipin. To obtain NhaA monomers, affinity-purified NhaA in the dialysis buffer was diluted 3-fold into potassium phosphate (KPi) buffer containing 100 mM KPi (pH 7.5), 5 mM MgCl2, and 5% DDM. The mixture was incubated at 4 °C for 90 min in an Eppendorf thermomixer (Comfort) with slow agitation (300 rpm). Then, the protein was loaded on NiNTA, thoroughly washed in KPi buffer containing 0.015% DDM (3 × 1.5 mL) to remove the high detergent concentration, and eluted in elution buffer. Samples of the monomeric eluted suspension (12 µL, 1.7 µg protein, 7.6 µM NhaA) were then prepared. One sample with no addition served as a control, and to the other samples different phospholipids were added at different phospholipid/NhaA molar ratios: cardiolipin (Avanti, 841199), PG (Avanti, 841188), or PE (Avanti, 840027). The phospholipid-containing samples were sonicated for 10 s in a bath sonicator (Laboratory Supplies Co. USA, Model G 112 SPIT, 600 V/Ts, 80 KC, 5 Amp) at 23 °C and incubated for 45 min at 4 °C with slow agitation. Then, 15 µL of native gel sampling buffer was added, and the proteins were resolved on native gels to analyze the percentages of monomers and dimers.

Isolation of membrane vesicles and assay of Na+/H+ antiport activity. EP432 cells transformed with the respective plasmids were grown in LBK medium, and everted membrane vesicles were prepared and used to determine the Na+/H+ antiport activity as described previously 44,45. The assay of antiport activity was based on the measurement of Na+-induced changes in ΔpH, as monitored with acridine orange, a fluorescent probe of ΔpH. The fluorescence assay was performed with 2.5 mL of reaction mixture containing 50-100 µg of membrane protein, 0.5 µM acridine orange, 150 mM choline Cl, 50 mM BTP, and 5 mM MgCl2, and the pH was titrated with HCl as indicated. After energization with D-lactate (2 mM, pH 7, titrated with KOH), quenching of the fluorescence was allowed to reach a steady state, and then Na+ (10 mM) was added. A reversal of the fluorescence level (dequenching) indicates that protons are exiting the vesicles in antiport with Na+. As shown previously, the end level of dequenching is a good estimate of antiport activity 46, and the concentration of the ion that gives half-maximal dequenching is a good estimate of the apparent Km of the antiporter for Na+ (or Li+) 46,47. The concentration range of the cations tested was 0.01-100 mM at the indicated pH values, and the apparent Km values were calculated by linear regression of the Lineweaver-Burk plot.
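For readers who want to reproduce the last step: the Lineweaver-Burk regression is a linear fit of 1/v against 1/[S], whose slope is Km/Vmax and whose intercept is 1/Vmax. The sketch below uses made-up dequenching values purely for illustration; they are not measurements from this study:

```python
import numpy as np

# Illustrative (made-up) data: Na+ concentrations (mM) and percent dequenching.
s = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0])       # [S], mM
v = np.array([12.0, 25.0, 40.0, 57.0, 72.0, 85.0, 91.0])  # activity proxy

# Lineweaver-Burk: 1/v = (Km/Vmax) * (1/[S]) + 1/Vmax
slope, intercept = np.polyfit(1.0 / s, 1.0 / v, 1)
vmax = 1.0 / intercept
km = slope * vmax
print(f"apparent Km = {km:.2f} mM, Vmax = {vmax:.1f} (% dequenching)")
```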
Geographic availability and accessibility of day care services for people with dementia in Ireland

Background: Day care is an important service for many people with dementia and their carers. In Ireland, day care services for people with dementia are delivered by a mix of dementia-specific day care centres and generic day care centres that cater for people with dementia to varying degrees. In this paper we examine the geographic distribution of day care services for people with dementia relative to potential need. Methods: Using a national survey of day care centres, we estimate the current availability of day care services for people with dementia in the country. We use geographic information systems (GIS) to map day care provision at regional and sub-regional levels and compare this to the estimated number of people with dementia in local areas. Results: There is significant variation across the country in the existing capacity of day care centres to cater for people with dementia. The number of places per 100 persons with dementia in the community varies from 14.2 to 21.3 across Community Health Organisation areas. We also show that 18% of people with dementia do not live within 15 km of their nearest day care centre. Conclusion: Currently, day care centres in many parts of the country have limited capacity to provide a service for the people with dementia who live in their catchment areas. As the number of people with dementia increases, investment in day care centres should be targeted at areas where need is greatest. Our GIS approach provides valuable evidence that can help inform decisions on future resource allocation and service provision in relation to day care.

Overall, people with dementia who attend day care services typically express high degrees of satisfaction [6-8]. Attending a day care centre provides the opportunity for social interaction and a sense of structure and routine [9, 10], and day care has been shown to provide people with dementia with a range of benefits. These include increased wellbeing [7, 11]; better sleeping habits [11, 12]; reduced neuropsychiatric symptoms and use of psychotropic drugs [12, 13]; and reduced family carer stress [14, 15]. Having access to day care can also improve the relationship between the carer and the person with dementia by providing time apart and facilitating employment [5]. Demand for day care services for people with dementia is influenced by a number of factors, particularly personal preferences and the appropriateness of the provision to the person's needs. People with dementia are often reluctant to go to a day care centre, which poses a dilemma for caregivers [16, 17]. Many will only seek day care late in the condition, when cognition and function have declined significantly [18]. In addition, there are significant drop-out rates among people with dementia with severe behavioural problems and depression [19]. Centres that cater for people with dementia may also have restrictions on enrolment; incontinence and disruptive behaviour are cited as the most common restrictions in the US [20]. Not surprisingly, therefore, day care utilisation rates among people with dementia tend to be low [1]. Similar to other countries, day care for people with dementia in Ireland is delivered through a mix of dementia-specific centres and older persons' centres, which typically provide for a small number of people with dementia as part of a generic service [20, 21].
Because of this, and because of limitations in the data collection infrastructure of day care centres, a key challenge lies in identifying the current provision of day care services for people with dementia in the country. The national survey data reported here, the first for Ireland combining national data on location and attendees, take on a particular significance in this regard. Furthermore, while tools for assessing geographic accessibility have been developed for a range of health services [22], little is known about the geographic accessibility (distance to a day care centre) or availability (supply of places) of day care services for people with dementia. This paper uses geographic information systems (GIS) methods to provide evidence for policy and service planning decisions relating to the allocation of resources in the day care sector, though the approach could equally be applied to other services and other jurisdictions. We take a multi-level approach to the resource allocation problem of identifying the parts of the country with the greatest need. To assist in resource allocation at the national level, we examine disparities across administrative health regions (Community Health Organisations, CHOs) in the availability of day care for people with dementia. We then look at availability within each of these regions to identify sub-regions (Community Health Networks, CHNs) with the lowest availability. Finally, we look at access and population density to assist local decision makers in identifying preferred locations for future investment in day care infrastructure within each CHN.

Day care centre survey

Data on day care centres are sourced from a national survey of Irish day care centres carried out by the Health Service Executive (Ireland's national health service) in 2018. This survey defined a day care centre as follows: "Day Care Centres are open for a minimum of five hours per day for at least one day per week. All clients are referred to the service by a healthcare professional, e.g. PHN or Primary Care Team member, usually requiring completion of a referral form. Some dementia-specific centres may accept a referral from an appropriate health professional or family. Attendance at the Day Care Centre service is open-ended and usually long term." This definition positions day care centres between social activities, such as active retirement groups, which are accessed for fewer hours, and day hospitals, where attendance is short term with a specific assessment, treatment, or rehabilitative objective. The Health Service Executive had a list of day care centres which it funds; this provided the main population frame for the national survey. Services operated by the Alzheimer Society of Ireland (ASI) or Western Alzheimer were identified by cross-referencing with the ASI/National Dementia Office audit of dementia-specific services [23]. The survey also incorporated data from a similar regional (Cork and Kerry) survey of 41 centres carried out in May 2016. The survey questionnaire was initially piloted by a number of day care centres and amended based on feedback from the pilot. The final survey questionnaire profiled day care centre service activity for one week, from April 30th to May 6th, 2018, and included questions on the total number of places and the actual attendances on each day of that week. The key respondent was the administrator/manager of the day care centre. The survey included questions on the number of places for people with dementia on each day.
In addition, the survey gathered information on the characteristics of clients, including age category, gender, dementia status, and dependency. The dementia status of clients, where available, was based on the report of the respondent and not necessarily on a validated diagnosis of dementia. Dependency status was predominantly based on a Barthel Index assessment [24]. A comment box allowed respondents to specify the centre's policy in relation to people with dementia attending the day service. The characteristics of the building in which the centre is located and the organisation and funding of the service were also covered in the survey. A total of 317 day care centres for older people were identified, all of whom responded to the survey. The lack of data collection infrastructure in many of the centres meant that the survey placed a significant demand on the personnel running the centres, who have little expertise in data collection. However, significant effort and persistence on the part of the survey team, such as follow-up calls, resulted in the 100% response rate. A number of questions in the survey had incomplete or missing data. This was particularly the case for client data on age and physical dependency and reflects the generally poor infrastructure for data collection in day centres.

Identifying categories of day care provision

In order to identify day care places for people with dementia, data from the survey were used to categorise day care centres into four mutually exclusive groups, namely:

1. Dementia-specific day care centre;
2. Dementia-specific days within a generic centre;
3. Dementia within a generic day care centre;
4. Centre with no dementia activity recorded.

First, a centre was categorised as a "dementia-specific" centre where: i) the centre was operated by the Alzheimer Society of Ireland (ASI) or the Western Alzheimer Society (collectively referred to as 'Alzheimer Societies' throughout); ii) qualitative comments in the survey indicated that the centre was dementia-specific; iii) over 4 days per week were dementia-specific; or iv) all clients were identified as having dementia. A small number of dementia-specific centres were co-located with generic day centres for older people. These centres are typically run by the Alzheimer Society of Ireland for one or two days a week, with the generic centre operating on the other days. In these cases, where two surveys were returned, the centres were recorded as two separate entities: one a dementia-specific centre, and the second a generic day centre. In the second category, a centre was designated as having "dementia-specific days" where: i) the generic centre specified certain days on which all places were dedicated to people with dementia and no other people attended, while on other days there was a mix of people with and without dementia; or ii) qualitative comments in the survey indicated that the centre operated dementia-specific days. The third category applied when people with dementia attended generic day care services alongside other users but did not generally receive dementia-specific attention. Finally, centres were categorised as "no dementia recorded" where: i) qualitative comments reported that the centre did not take people with dementia or did not have any people with dementia attending; or ii) there were no records of dementia-specific days, dementia places, or attendees with dementia. While these centres may not have had an explicit policy of not accepting people with dementia, there was no indication from the data that they catered for people with dementia.
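The categorisation rules above can be expressed as a small decision function. The sketch below is only illustrative: the field names are hypothetical stand-ins for the actual survey variables, and the thresholds mirror the rules as described in the text:

```python
from dataclasses import dataclass

@dataclass
class Centre:
    """Hypothetical survey record; field names are illustrative only."""
    operator: str                    # e.g. "ASI", "Western Alzheimer", other
    comments: str = ""               # free-text comment box
    dementia_specific_days: int = 0  # days/week reserved for dementia clients
    n_clients: int = 0
    n_clients_with_dementia: int = 0

def categorise(c: Centre) -> str:
    """Assign one of the four mutually exclusive categories."""
    text = c.comments.lower()
    if (c.operator in ("ASI", "Western Alzheimer")
            or "dementia-specific centre" in text
            or c.dementia_specific_days > 4
            or (c.n_clients > 0 and c.n_clients == c.n_clients_with_dementia)):
        return "dementia-specific day care centre"
    if c.dementia_specific_days > 0 or "dementia-specific days" in text:
        return "dementia-specific days within generic centre"
    if c.n_clients_with_dementia > 0:
        return "dementia within generic day care centre"
    return "no dementia activity recorded"

print(categorise(Centre(operator="ASI")))  # -> dementia-specific day care centre
```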
Assessing the capacity of day care centres to cater for people with dementia

The aim of this study is to identify geographic variation in the provision of day care for people with dementia, in order to assist in the planning of community services for people with dementia. We generate an estimate of the number of weekly day care places that are available to people with dementia, irrespective of whether these are provided in a dementia-specific centre or a generic centre for older people. We examine capacity on a weekly basis, taking into account the variation in the number of days that centres open; some centres open 5 days a week while others open for only 1 day a week, so the weekly number of places is the best comparator of service availability. For example, a centre with ten places that runs for 3 days a week is deemed to have 30 weekly places. Due to incomplete data in the surveys, the capacity of centres to cater for people with dementia is not directly evident. To generate an estimate of the number of people with dementia that a centre caters for, a set of rules was used to compile responses to the relevant questions in the survey, with a separate approach to estimating the number of weekly places for people with dementia for each category of day care centre. For dementia-specific centres, all weekly places are deemed to be dementia places. While this is accurate for Alzheimer Society centres, which require a diagnosis for people to use the services, the approach may overestimate the number of dementia places available in a small number of other centres, as they may not require a diagnosis to use the service. For centres categorised as having dementia-specific days, the number of places available on dementia-specific days is used as the estimate of the number of dementia places. For centres that cater for people with dementia within a general service, we sought to identify the proportion of the service being used by people with dementia. This was identified from the data on the dementia status of individual attendees. While there was a significant level of incomplete data in this variable, it provides a basis for estimating the extent to which the service was directed towards people with dementia. This approach was augmented with qualitative comments in the survey that identified the number of people with dementia and the number of places specifically designated for people with dementia in the generic service.

Spatial analysis

Geographic areas. We examine the availability of day care for people with dementia at two different geographic levels. First, there are nine CHO areas in Ireland, with an average population of 529,000 people; the CHO is an important administrative area for community health resource allocation decisions. The second level we examine is the CHN area, of which there are 96, with typically around 10 CHNs in a CHO. These areas have been recently delineated and are likely to become increasingly important in the allocation of community resources in the coming years. The estimated number of people with dementia living in the community in CHN areas ranges from 104 to 700 across Ireland, based on estimates for 2016 [3]. To examine accessibility, we use a smaller geographical unit, the electoral district (ED), to estimate distances from day care centres to the population with dementia. There are 3441 EDs in Ireland, with an average population of 1397 and an average area of 20 km2. All spatial and data analyses were performed within a GIS environment (ESRI® ArcGIS® ArcMap™ version 10.2) and used ungeneralised (high-resolution) administrative boundary shapefiles for Ireland from the Central Statistics Office.
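To make the capacity calculation above concrete, the sketch below implements the weekly-places convention (places per day times days open) and the availability rate used in the comparisons that follow (weekly dementia places per 100 community-dwelling persons with dementia). The implied national dementia population of roughly 35,700 is back-calculated from the totals reported in the Results (5969 weekly places, 16.7 per 100) and is used here only as an illustrative input:

```python
def weekly_places(places_per_day: int, days_per_week: int) -> int:
    """A ten-place centre open 3 days a week has 30 weekly places."""
    return places_per_day * days_per_week

def places_per_100(weekly_dementia_places: float, dementia_pop: float) -> float:
    """Weekly dementia places per 100 persons with dementia in the community."""
    return 100.0 * weekly_dementia_places / dementia_pop

print(weekly_places(10, 3))                    # -> 30
# 5969 national weekly places and ~35,700 community-dwelling persons with
# dementia (implied by the reported 16.7 per 100) give the national rate:
print(round(places_per_100(5969, 35_700), 1))  # -> 16.7
```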
There are 3441 EDs in Ireland, which have an average population of 1397 and cover an average area of 20 km². All spatial and data analyses were performed within a GIS environment (ESRI® ArcGIS® ArcMap™ version 10.2) and used ungeneralised (high-resolution) administrative boundary shapefiles for Ireland from the Central Statistics Office.

Geographic variation in availability of day care

To compare the availability of day care across geographic areas, we divide the estimated number of day care places for people with dementia by the estimated community-dwelling dementia population. The estimated number of people with dementia living in the community in each CHO and CHN area is based on national estimates [3]. While these estimates do not take into account local variation in the proportion of people with dementia living in nursing homes, they provide a population base for comparing the availability of day care across geographic areas. We first compare the variation in availability across the nine CHO areas. We then compare the availability across the CHN areas. While the comparison of the provision of day care services across the CHN areas is complicated by day care centres that are located at the edge of an area, this approach, when visualised on a map, provides a straightforward method of indicating areas with potentially low availability. Day care centres are typically accessed by a bus collection service that collects clients from their homes and brings them to the centre. This limits the catchment area from which people can access the service, as overly long journey times are generally not acceptable [7,25]. Travel time has been shown to be an important determinant of day care utilisation in a number of previous studies [16, 25-28]. To show the areas of CHNs where access is likely to be more difficult, we identified EDs whose centroid (geographic centre) was more than 15 km along the road network from a day care centre; this represents the outer boundary of acceptable journey times for a bus collection service.

Results

The survey shows that there are at least 14,193 unique individuals who typically attend day care centres for 1 to 3 days per week. Three-quarters of attendees are categorised as over 65 without dementia; 5% of attendees are identified as being under 65 years of age; and 20% of attendees are identified as over 65 and having dementia (a total of 2805 individuals with dementia). This represents between 8 and 14% of people with dementia in the community [3], though it is likely to be an underestimate of the number of people with dementia attending day care services due to the under-diagnosis of dementia in the population generally and incomplete data in the survey. As described above, we categorised day care centres into four types: dementia-specific centres; centres with dementia-specific days; centres that provide for people with dementia as part of a generic service; and centres where no dementia places or cases were recorded. Out of a total of 317 centres, 245 centres (77%) provide places for people with dementia to some extent. Overall, these centres provide a total of 5969 weekly places for people with dementia. There are 58 dementia-specific day care centres across the country. These centres, mostly run by the ASI (80%), account for 46% of the estimated number of dementia places in the survey. A small number of generic centres (16) have dementia-specific days. These centres provide 5% of dementia places.
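The availability measure defined above (weekly places divided by the estimated community-dwelling dementia population) and the quintile grouping used in Figure 1 reduce to a few lines of computation. Below is a minimal sketch assuming hypothetical input files and column names; it is illustrative rather than the ArcGIS workflow actually used.

```python
import pandas as pd

centres = pd.read_csv("day_care_survey.csv")    # hypothetical survey export
chn_pop = pd.read_csv("chn_dementia_pop.csv")   # hypothetical per-CHN estimates

# Weekly places: a centre with 10 dementia places open 3 days a week
# contributes 30 weekly places.
centres["weekly_dementia_places"] = (
    centres["dementia_places_per_day"] * centres["days_open_per_week"]
)

chn = (centres.groupby("chn_area", as_index=False)["weekly_dementia_places"]
              .sum()
              .merge(chn_pop, on="chn_area"))

# Weekly dementia places per 100 people with dementia in the community.
chn["places_per_100"] = (
    100 * chn["weekly_dementia_places"] / chn["est_dementia_pop"]
)

# Quintiles (5 equal groups) of the 96 CHN areas, as mapped in Figure 1.
chn["quintile"] = pd.qcut(chn["places_per_100"], q=5, labels=False) + 1
```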
There are 171 centres that cater for people with dementia within a general service. These centres account for 49% of places for people with dementia. A substantial minority of centres (72) did not record any dementia places or cases in the survey. Reasons reported in the survey for not taking people with dementia include an unsuitable building or a lack of appropriate staff. Table 1 shows the estimated weekly dementia places per person with dementia living in the community by CHO area. Nationally, there are 16.7 places per 100 people with dementia living in the community. However, the table shows a high level of variation in the availability of day care for people with dementia across CHO areas. For example, in CHO 1 there are 14.2 weekly places per 100 people with dementia living in the community, compared to 21.3 per 100 people in CHO 9. These figures show the weekly number of places; the number of unique users will be determined by the number of times per week that individuals attend. Within each of these CHO areas, day care services that cater for people with dementia are not evenly distributed across CHN areas. Figure 1 shows the geographic variation across CHNs in the number of weekly dementia places per 100 persons with dementia in the community for Ireland and the Dublin region. Specifically, the maps show the quintiles (5 equal groups) of the 96 CHN areas based on the number of dementia places. So, for example, the areas in the bottom quintile (shown in blue) have a day care availability of fewer than 5.0 weekly places per 100 people with dementia in the community, while the areas in the top quintile (shown in red) have more than 25.6 weekly places per 100. Some of the areas with low availability, particularly in urban areas, adjoin areas with very high availability, though many do not. This approach shows the parts of the country that have more places than others. In addition to availability, we also look at the geographic accessibility of day care services for people with dementia, in particular the distance people with dementia live from a centre. This is particularly important in rural areas, where service locations are further apart. Figure 2 (left) shows the EDs (in red) whose centroid is more than 15 km from a day care centre that caters for people with dementia. Overall, an estimated 18% of people with dementia live in these areas, which are likely to be beyond the range of a day care bus service. Overall, the map shows that while most areas of the country are within the catchment area of a day care centre that caters for people with dementia, there are substantial parts of the country where geographic accessibility is likely to be a significant issue. While Fig. 2 (left) provides a good indication of geographic accessibility to day care centres, it is also important to consider population density, since it could be the case that areas with poor accessibility also have relatively low potential need for a service. Thus, Fig. 2 (middle) presents information, again by ED, on the number of people aged over 65 years per square km. Figure 2 (right) then combines both maps to highlight the population density in the areas with poor accessibility. It shows, for example, areas in blue and green that have poor access to services but also have low population densities. On the other hand, areas in red and orange have poor accessibility and relatively high population densities.
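The accessibility screen in Fig. 2 (identifying EDs whose centroid lies more than 15 km from a centre catering for people with dementia) can be sketched with geopandas. The input files are hypothetical, and straight-line distance is used here for brevity; the paper's analysis measures distance along the road network, which straight-line distance will understate.

```python
import geopandas as gpd

# Hypothetical inputs, reprojected to Irish Transverse Mercator (EPSG:2157)
# so that distances are in metres.
eds = gpd.read_file("electoral_districts.shp").to_crs(epsg=2157)
centres = gpd.read_file("dementia_day_care_centres.shp").to_crs(epsg=2157)

# Distance from each ED centroid to the nearest centre.
ed_pts = eds.copy()
ed_pts["geometry"] = eds.geometry.centroid
nearest = gpd.sjoin_nearest(ed_pts, centres, distance_col="dist_m")
nearest = nearest[~nearest.index.duplicated(keep="first")]

# Flag EDs likely beyond the range of a day care bus collection service.
eds["poor_access"] = nearest["dist_m"] > 15_000
print(eds["poor_access"].mean())  # share of EDs with poor accessibility
```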
The majority of EDs with poorer accessibility to day care services for people with dementia also have low population densities. However, it is the areas with poor accessibility and high population densities that may be of most interest to government.

Illustrative example: the local allocation of investment

To provide an example of how this approach can be used at a local level for service planning, we look at a hypothetical decision to develop the provision of day care for people with dementia in one CHN area, North Kerry, shown by the black boundary in Fig. 3. At present, day care provision exists in two of the main towns, Listowel and Castleisland. Listowel has 75 generic weekly day care places and 30 dementia-specific day care places, while Castleisland has 140 generic weekly day care places but no dementia-specific service. So what can a GIS analysis tell us about where any potential new day care centre should be located? Although the area surrounding Listowel has relatively good geographic accessibility to dementia day care services (Fig. 3 left), it has comparatively high population density and potential demand; there are an estimated 167 people with dementia in the 15 km catchment area (Fig. 3 right), based on prevalence data estimated using the 2016 CSO census of the Irish population and the international literature on prevalence [29]. In contrast, the area surrounding Castleisland has poor accessibility to dementia day care (Fig. 3 left) but has comparatively lower population density and lower demand than Listowel, with approximately 89 people with dementia in the 15 km catchment area (Fig. 3 right). Thus, this information, derived from a GIS analysis such as that presented in Fig. 3, can provide valuable input to aid planning decisions linking existing availability, population density and dementia prevalence. However, it is important to stress that other factors may also influence the final decision on the location of new day care centres. For example, day care centres in neighbouring CHNs may provide accessible places for people on the outskirts of this CHN. In addition, the availability of other services, for example a well-functioning Alzheimer café in one of the towns, may mean that demand for day care there is not as high as in the other town. Nonetheless, the GIS information provides vital objective evidence that can contribute to the decision.

Discussion

This paper uses GIS methods to provide evidence for policy decisions relating to the allocation of resources for day care provision in Ireland. A significant investment in day care services is required to maintain or increase the current service for people with dementia, due to the increasing number of people living in the community with dementia. The estimated average growth rate in the number of people with dementia in Ireland is 3.6% per year to 2030 [4]. Thus, just to maintain the current level of service, an estimated 4433 new weekly dementia day care places will be required by 2031. Prior to this research, it was not possible to identify variation in the availability and accessibility of day care services for people with dementia across the country. Therefore, our GIS approach provides valuable evidence that can help inform decisions relating to resource allocation and service provision in this sector in the future. In some cases, increased capacity could be achieved by increasing the number of days on which current facilities provide services.
In other cases, where buildings are unsuitable or the current facilities are at maximum capacity, new facilities will be required. An important issue to be considered is whether additional capacity should be provided through dementia-specific centres, dementia-specific days in generic centres, or through a good-quality generic day care service. Unfortunately, there is little evidence on which of these is the best model of delivery [30,31]. A number of respondents to the survey commented that it was better to incorporate people with dementia into a general service, and this may offer more scope for enhanced provision, particularly in areas with low population densities. However, the inclusion of people with dementia in a general day care service will require action in relation to staff training, staffing levels and the development of the physical environment. In terms of resource allocation decisions, public resources in Ireland are currently allocated for day care services for people with dementia through block grants to the Alzheimer Societies and through funding for individual centres from the Older Persons Services Budget in each CHO area. In this paper we take a multi-level approach to the resource allocation problem of identifying the parts of the country with the greatest need for day care services for people with dementia, taking account of existing provision, population density, dementia prevalence and accessibility, measured by distance. The GIS methods we employ provide a practical and easily replicated decision-making framework for the allocation of regional and subregional budgets, and for informing the locations where new services may be required. For planning at a national and regional level, the main focus of the resource allocation framework we outline is towards balancing the availability of day care services for people with dementia across CHO and CHN areas. For planning at a local level, both population density and accessibility are key considerations. By providing an illustrative example, we have shown how a decision about where to locate increased capacity for day care services for people with dementia could be approached. The population density map shows parts of a CHN area where it may be most beneficial to provide a day care service in terms of potential demand. The accessibility map shows parts of a CHN area that have poor accessibility to dementia day care facilities; for these areas, alternative solutions may also be required. For example, it may be cost-effective to use a generic day care centre in that area, to provide for people's needs in their own home, or to put increased resources into other transport options, such as volunteer drivers. In terms of strengths and limitations, this study benefits from the availability of a comprehensive national survey of day care services. However, because of the low rates of dementia diagnosis, and the poor recording of dementia diagnoses, there may be an underestimation of the number of clients with dementia and hence of the number of existing day care places for people with dementia. In addition, due to the limitations of the data collection infrastructure in day care centres, a significant level of data cleaning and processing was required to estimate the number of places utilised by people with dementia in each day care centre. These factors may have resulted in over- or underestimates of the number of dementia places in some centres.
In addition, this study does not examine issues such as utilisation rates and waiting times, which are likely to be influenced by a broad range of factors such as the quality of the service and perceptions of day care. Nonetheless, the comprehensiveness of the survey data allowed us to generate a range of data and maps which, when used collectively, show the variation in day service provision across the country. At the very least, the maps can be an important aid to the resource allocation decision-making process.

Conclusions

This paper provides a GIS-based approach that can help inform future resource allocation decisions for day care services for people with dementia at regional, subregional and local levels. The mapping of the availability of day care services for people with dementia shows substantial geographic variation across the country. In addition, there are large parts of the country where day care services are difficult to access. However, many of these areas have a low population density. These maps can be used to assist in targeting investment to areas where need is greatest.
Hydrophilization and Functionalization of Fullerene C60 with Maleic Acid Copolymers by Forming a Non-Covalent Complex

In this study, we report an easy approach for the production of aqueous dispersions of C60 fullerene with good stability. Maleic acid copolymers, poly(styrene-alt-maleic acid) (SM), poly(N-vinyl-2-pyrrolidone-alt-maleic acid) (VM) and poly(ethylene-alt-maleic acid) (EM), were used to stabilize C60 fullerene molecules in an aqueous environment by forming non-covalent complexes. Polymer conjugates were prepared by mixing a solution of fullerene in N-methylpyrrolidone (NMP) with an aqueous solution of the copolymer, followed by exhaustive dialysis against water. The molar ratios of maleic acid residues in the copolymer to C60 were 5/1 for SM and VM and 10/1 for EM. The volume ratio of NMP to water used was 1:1.2-1.6. Water-soluble complexes (composites) dried lyophilically retained solubility in NMP and water but were practically insoluble in non-polar solvents. The optical and physical properties of the preparations were characterized by UV-Vis spectroscopy, FTIR, DLS, TGA and XPS. The average diameter of the composites in water was 120-200 nm, and the ζ-potential ranged from −16 to −20 mV. The bactericidal properties of the obtained nanostructures were studied. Toxic reagents and time-consuming procedures were not used in the preparation of the water-soluble C60 nanocomposites stabilized by the proposed copolymers.

Introduction

Fullerene C60, a spherical cage-shaped molecule, is one of the most studied in the family of carbon allotropes. Due to its exceptionally high symmetry, C60 is highly stable, and its structure is the most thermodynamically efficient. The molecule, in the shape of a truncated icosahedron, has a diameter of 0.7 nm and contains 12 pentagonal faces and 20 hexagonal ones. The electronic structure of fullerene determines its unique properties: electron-acceptor activity, high polarizability, radical scavenging activity and, vice versa, the generation of free radicals under light irradiation. In addition, C60 has a large surface area with a large number of equivalent reaction sites [1]. Pure fullerene (pristine C60) and a number of its derivatives are commercially available products [2]. Fullerenes have been used as building blocks for the creation of covalent or non-covalent 2D/3D carbon materials (third-generation solar cells); such materials have also found application in catalysis, spintronics, water treatment, etc. [3-6]. The presence of C60 molecules leads to an increase in the photoconductivity of conducting polymers and organo-metallic compounds [7]. Due to their spherical form and unique electron properties, fullerenes attract researchers in the fields of medicine and biology [8-11]. Nanomaterials based on C60 have good prospects in cancer therapy [12-14]. Under the influence of UV radiation, fullerene derivatives (malonates) can cause the death of HeLa cells [15]. C60 fullerene compounds can provide radioprotection to radiosensitive mammalian cells [16]. A noticeable anti-inflammatory effect of an aqueous dispersion of C60 fullerene, which also acts as an antioxidant and an anti-aging antioxidant drug, has been shown [17-20]. The antibacterial properties and virucidal activity of fullerene and its derivatives have been described [21-25].
Fullerenes are able to reduce the apoptosis of neurons induced by reactive oxygen species (ROS). It is also assumed that, by inhibiting the level of ROS, fullerenes can have an antiallergic effect [26]. However, none of the listed applications of aqueous and oil C60 dispersions are currently available to consumers, except skin creams [27]. A number of studies show that aqueous dispersions of pristine C60 do not possess acute or subacute toxicity [17, 28-30]. For the use of C60 and its derivatives for biomedical purposes, it is preferable to disperse them in aqueous media due to the biocompatibility, non-toxicity and environmental friendliness of this solvent. The main approaches to the production of water-dispersible fullerene can be distinguished:

1. chemical (covalent) modification;
2. direct dispersion in water and the solvent-exchange method;
3. non-covalent complexation of fullerene with a number of hydrophilic compounds;
4. encapsulation.

The most popular chemical modifications of fullerene are reactions of nucleophilic and radical addition, including cycloadditions, hydroxylation (fullerenol), carboxylation and amination (amino acid-containing adducts) [31-33]. The synthesis of covalent assemblies consisting of inherently chiral open inter-60 fullerenes has been demonstrated [34]. Fullerene molecules can also be incorporated into polymeric structures by covalent bonding. These include "pearl necklace" structures, "charm bracelet" structures, organometallic polymers, cross-linked polymers, end-capped polymers, star-shaped polymers and supramolecular polymers. Some of them have been shown to be soluble in water [35]. In cases of chemical modification, in order to give fullerene water-solubility, it is often necessary to introduce more than one functional group into its structure. Such fullerene derivatives are difficult to identify and separate because the system often contains a mixture of regioisomers. Such modification can change the chemical and physical properties of the C60 molecules. In addition, methods of chemical C60 functionalization require harsh conditions and the use of catalysts or strong oxidizing agents, favoring cage opening.
The most striking individuality of fullerene molecules is manifested in the unmodified form. The dispersion of fullerene in water has been carried out in a series of experiments, using prolonged mixing and sonication of fullerite and water [36-38]. Known methods for the formation of stable aqueous dispersions are based on the transfer of C60 from an organic solution (benzene, toluene, tetrahydrofuran) into an aqueous phase (the "solvent exchange method") using ultrasonic treatment. The organic solvent is gradually displaced by intense ultrasound, which ensures the heating of the system and the evaporation of the solvent [39,40]. Such dispersions contain hydrated clusters of C60 molecules; their size depends on the characteristics of the method. However, an aromatic solvent is not suited for obtaining solutions for medical purposes, and it is also difficult to standardize the ultrasonic technology, which depends on the sonication time and the volume and geometry of the vessel with the dispersion. It is also difficult to completely remove traces of toluene due to its specific π-stacking interaction with fullerene molecules [41]. The above approach is quite labor- and energy-intensive, and the final concentration of fullerene is limited (<10⁻⁵ M). It should be added that the dried C60 dispersions obtained by the abovementioned methods usually irreversibly lose their solubility in water. An alternative approach for avoiding some of the above-stated problems is the conjugation of fullerene with a number of hydrophilic compounds without forming covalent bonds. The electron structure of fullerene is less affected during such bonding; hence, its physical properties are retained to a significant extent. Some variants are the supramolecular encapsulation of fullerene with γ- or β-cyclodextrin and dendritic derivatives of cyclotriveratrilene and calixarenes [42-45]. However, the strength of such complexes does not always correspond to the required one. The solubilization of fullerene with proteins is another option [46,47]. The encapsulation of fullerene C60 in lipid micelles has been described by many researchers [48-51]. Although nonionic surfactants such as Triton and Tween provide good solubilization of C60, this method also uses toluene to initially dissolve the fullerene, which then must be removed [49].
The approach of including fullerene in polymers seems promising. Supramolecular chemistry approaches for the preparation of host-guest inclusion complexes with fullerenes have made significant progress over the past decade [3]. Polymer-fullerene nanocomposites are new materials with many applications in the biomedical field [8]. Fullerenes can be retained in the cavity of host polymer molecules due to van der Waals and hydrophobic interactions and π-π stacking, as well as electrostatic interactions [52]. It is known that polymers containing oxygen or (and) nitrogen (electron-donor elements) may form charge-transfer complexes with fullerene [53,54]. The inclusion of fullerene in compositions of hydrophilic polymers can enable them to be dispersed in water; in addition, the manufacturability of such complexes increases, which facilitates their use in a variety of fields. The dispersibility of C60 in polar solvents may be increased using solubilization with amphiphilic polymers, such as polyethylene glycol [55], poly(styrene-b-dimethylacrylamide) block copolymer [56] or poloxamer [10]. Water-soluble polymers of acrylamide and acrylic acid that contain fullerene have been prepared by low-temperature radiation-induced living polymerization in organic solvents [57]. The most well-known are water-soluble fullerene complexes with polyvinylpyrrolidone (PVP) [58-62]. During the complexation of fullerene with PVP, the fullerene content relative to the polymer usually does not exceed 1-1.5% [60]. Maleic acid copolymers have not yet been used for the hydrophilization of fullerene. Meanwhile, such copolymers have a number of positive properties: they are water-soluble and amphiphilic, and many of them are commercially available and nontoxic. The copolymers have a regular structure: the polymer units clearly alternate. Previously, such copolymers were used to stabilize silver nanoparticles [63] and colloidal nanohybrid structures of silver (gold) and zinc oxide [64,65]. In this work, we propose to use maleic acid copolymers for the hydrophilization of fullerene and for the introduction of easily modified functional groups into the composition. The complexation of fullerene C60 with the copolymers was carried out using the so-called "dialysis principle" [66].

Instrumentation

The pH values were determined using a Fisher Scientific 300 403.1 pH meter (Waltham, MA, USA). FTIR spectra (KBr) were recorded using a Magna IR-720 Fourier spectrometer (Nicolet, Parsons, WV, USA). The UV-visible absorption spectra were measured with a UVIKON-922 spectrophotometer (Zeiss Group, Baden-Württemberg, Germany). Registration was carried out without diluting the reaction solution, and a cuvette with a diameter of 0.2 cm was used for the spectrophotometric measurements.
X-ray photoelectron spectroscopy studies were carried out on an Axis Ultra DLD spectrometer (Kratos Analytical, Manchester, UK) using monochromatic Al Kα (hν = 1486.6 eV) radiation. Survey and high-resolution spectra were recorded at pass energies of 160 and 40 eV, with steps of 1 eV and 0.1 eV, respectively. The sampling area was 300 × 700 µm². Samples were mounted on a titanium holder using double-sided adhesive tape and studied at room temperature under a vacuum of <10⁻⁸ Torr. The energy scale of the spectrometer was calibrated according to the standard procedure based on the following binding energies: 932.62, 368.21 and 83.96 eV for Cu 2p3/2, Ag 3d5/2 and Au 4f7/2, respectively. To eliminate the effect of sample charging, the spectra were recorded using a neutralizer. Surface charging was corrected by referencing to the C-C/C-H peak identified in the C 1s spectra (285.0 eV). The background due to electron inelastic energy losses was subtracted by the Shirley method. Quantification was performed using the atomic sensitivity factors included in the software of the spectrometer. The elemental analysis was performed using an Elementar Vario MICRO cube C, H, N analyzer (DKSH Group, Tokyo, Japan) equipped with a thermal desorption column and a UV-Vis spectrophotometer (Agilent Technologies, Santa Clara, CA, USA). Measurements of ζ-potential and nanoparticle sizes were performed using dynamic light scattering (DLS) on a Zetasizer Nano ZS instrument (Malvern Instruments, Worcestershire, UK). The thermal stability of the initial copolymers, the copolymer/C60 mixtures and the composites was studied by thermogravimetric analysis (TGA). TGA measurements were performed on a Derivatograph-C (MOM Szerviz, Budapest, Hungary) on samples of about 15 mg at a heating rate of 10 °C/min in argon.

Synthesis of C60/Copolymer Composites

Synthesis of C60/VM: Initially, 6.25 mL of a solution of crystalline C60 in NMP (0.8 mg/mL) was mixed with 4 mL of an aqueous solution of VM (2 mg/mL, pH 8) using a magnetic stirrer. The molar ratio of the maleic acid residues of the copolymer to moles of C60 was 5/1. After 0.5 h, the resulting solution was subjected to exhaustive dialysis (cutoff 1 kDa) against deionized water (1.5 L, four changes). The dried sample was obtained by subsequent lyophilic drying (−55 °C, 0.05 mbar). The samples C60/SM and C60/EM were prepared under similar reaction conditions and reagent concentrations; the molar ratios of monomeric units of maleic acid residues of the copolymer to C60 were ~5/1 and 10/1, respectively.

Antimicrobial Activity Tests

The method of serial microdilution was used for the determination of the minimum inhibitory concentrations (MIC) of the preparations against the strains of microorganisms in accordance with the standard procedure [68]. The initial concentrations of C60/VM and C60/SM were 0.46 and 0.37 mg/mL, respectively (C60/EM: 0.50 mg/mL). A detailed description of the experiment was given elsewhere [69].
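As a check on the stated stoichiometry, the 5/1 molar ratio of maleic acid dimer units to C60 for the C60/VM synthesis can be reproduced from the quantities above. The sketch below is illustrative; the dimer-unit molar mass is an assumption derived from the monomer formulas (N-vinyl-2-pyrrolidone plus maleic acid), not a value quoted in the paper.

```python
# Stoichiometry check for the C60/VM synthesis described above.
M_C60 = 720.66               # g/mol, molar mass of C60
# Assumed dimer-unit molar mass for VM: N-vinyl-2-pyrrolidone (C6H9NO,
# 111.14 g/mol) plus maleic acid (C4H4O4, 116.07 g/mol).
M_VM_DIMER = 111.14 + 116.07

m_c60 = 6.25 * 0.8           # mg: 6.25 mL of a 0.8 mg/mL solution in NMP
m_vm = 4.0 * 2.0             # mg: 4 mL of a 2 mg/mL aqueous solution

n_c60 = m_c60 / M_C60        # mg / (g/mol) = mmol
n_dimer = m_vm / M_VM_DIMER  # mmol of maleic acid dimer units

print(f"dimer units : C60 ≈ {n_dimer / n_c60:.1f} : 1")  # ≈ 5.1 : 1
print(f"weight excess ≈ {m_vm / m_c60:.1f}-fold")        # ≈ 1.6-fold
```

The result, a roughly 5.1/1 molar ratio at a 1.6-fold weight excess, matches the ~5/1 ratio and the 1.6-2.0-fold weight excess reported in the Results section below.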
Results

To obtain a stable colloidal solution of a fullerene-polymer complex, the choice of the polymer component (matrix) as a stabilizer is critical: a suitable polymer coating prevents the aggregation of fullerene nanoparticles by reducing their surface energy. The stabilization of fullerene nanoparticles can occur not only through hydrophobic and/or π-π interactions; steric and/or Coulomb stabilization are also significant. The selected amphiphilic maleic acid copolymers are suitable according to the above criteria. NMP was selected as an aprotic solvent in which fullerene C60 (fullerite) and the polymers are sufficiently soluble, and because of its low toxicity (taking into account the medical aspect of this work) [70]. In the further description of the processes for obtaining stabilized aqueous dispersions of C60 and the analysis of their properties, for the purposes of the present article, the term "dissolution" will be used, with the resulting system being a colloidal solution (nanodispersion). Polymer-stabilized fullerene complexes can be obtained in two main ways: the dissolution (co-dissolution) of the fullerene and copolymer in NMP, or the addition of an aqueous polymer solution to a fullerene solution in NMP. The second variant is preferable because it takes less time and the final solution contains less NMP. Meanwhile, the co-dissolution of the fullerene and polymer in water at pH 6-8 did not result in a homogeneous solution. We used an approximately fivefold molar excess of the VM and SM copolymers relative to fullerene and a tenfold excess for the EM copolymer (calculated per dimer unit), which corresponded to a 1.6-2.0-fold excess by weight. The calculated content of fullerene in the samples was 33-38%. The values found by elemental analysis, gravimetric data and spectrophotometry of the corresponding solutions (λ = 340 nm, ε = 68,000 dm³ mol⁻¹ cm⁻¹) [71] were 30-33%. An increase in the fullerene content relative to the copolymer resulted in stronger aggregation in solution and was accompanied by the appearance of large particles that separated into the solid phase. The volume ratio of the NMP solution to water was 1:1.2-1.6. The resulting polymer-fullerene solutions were subjected to exhaustive dialysis (cutoff 1 kDa) against deionized water to remove the organic solvent. Drying the yellow-brown dialysate yielded brown powders (Figure 1) that were soluble in water. According to XPS and elemental analysis, the product contained approximately 1-2% nitrogen, even in the case of composites with nitrogen-free copolymers. This indicated that the products contained solvated NMP, and this was not due to insufficient dialysis, since fullerene forms charge-transfer complexes with NMP [38,72,73]. According to the elemental analysis data, the conjugates contained approximately 5-10% water. It is known that fullerene is also capable of complexing water [74]. After dialysis, the concentration of the complexes in aqueous solution was 300-400 µg/mL (fullerene content: 50-100 µg/mL). The stability of the aqueous solutions of the composites, according to our estimates, is at least 12 months when stored at 4-25 °C.
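The spectrophotometric determination of the fullerene content mentioned above is a direct Beer-Lambert calculation. A minimal sketch follows; λ = 340 nm and ε = 68,000 dm³ mol⁻¹ cm⁻¹ are the values cited from ref. [71], and the 0.2 cm cuvette is described in the Instrumentation section, while the absorbance reading itself is a hypothetical input.

```python
# Beer-Lambert estimate of the C60 concentration in a composite solution.
EPSILON = 68_000   # dm^3 mol^-1 cm^-1 at 340 nm (ref. [71])
PATH = 0.2         # cm, cuvette used for the spectrophotometric measurements
M_C60 = 720.66     # g/mol

A_340 = 1.0        # hypothetical measured absorbance at 340 nm

c_molar = A_340 / (EPSILON * PATH)      # mol/dm^3
c_ug_per_ml = c_molar * M_C60 * 1000    # g/dm^3 -> mg/dm^3, i.e. ug/mL

print(f"C60 concentration ≈ {c_ug_per_ml:.0f} µg/mL")  # ≈ 53 µg/mL
```

An absorbance of 1.0 in this geometry corresponds to about 53 µg/mL, consistent with the 50-100 µg/mL fullerene content reported above.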
The fullerene composite containing the most hydrophilic VM copolymer (C60/VM) had the best solubility: approximately 500 µg/mL at pH 7.0. The solubilities of C60/SM and C60/EM were noticeably worse, about 100 and 50 µg/mL, respectively. The solubility of the complexes in NMP was approximately 300-400 µg/mL for C60/SM and C60/VM and 500 µg/mL for C60/EM. Fullerene C60 is practically impossible to extract from the dried composites with chloroform, benzene or toluene, and no extraction of C60 occurred from aqueous solutions of the composites with these solvents. Thus, these complexes practically do not break down in these solvents. Figure 2 shows the UV-Vis spectra of the initial copolymers and the obtained conjugates. Solutions of the copolymers VM, EM and SM are colorless and absorb almost nothing at wavelengths greater than 300 nm (Figure 2(1-3)). The absorption spectrum of fullerene C60 in nonpolar hexane (where its aggregation is minimal) is characterized by four maxima at 213, 257, 328 and 406 nm [75]. In our case, the spectra of the composites do not have such distinct maxima (Figure 2(4-7)); there are small hills in the regions of 260 and 340 nm. This is probably due to the self-association of fullerene molecules, aggregation with copolymers, binding to NMP and water molecules and also some loss of the icosahedral symmetry of the C60 molecules [76]. After several months of storage, the spectra of the obtained solutions did not change. The hydrodynamic particle size of the diluted C60 samples measured by DLS ranged from 116 to 200 nm, with a polydispersity index (PDI) of 0.20-0.40 (Table 1). The distributions of scattered-light intensity over the particle size in the solutions of the hybrid macromolecular structures C60/SM and C60/VM were monomodal.

Table 1. The size and ζ-potential of hybrid macromolecular structures measured by the DLS method.
The size distributions of the composite particles differ and are related to the structure of the polymer chains. When the EM copolymer was used as a stabilizer for C60, the distribution was bimodal, and the colloidal solution contained micron-sized particles. The presence of fullerene in the composites apparently provides the greatest contribution to the aggregation of the resulting hybrid structures. The composite particle size may be related to the homoaggregation of fullerene, nC60 ↔ (C60)n [77-79]. At the same time, it was previously shown that maleic acid copolymers can aggregate in acidic and alkaline media, with bimodal particle size distributions in which the fast mode correlates with single chains (2-4 nm) and the slow mode with polymer aggregates (60-120 nm). Meanwhile, EM copolymers aggregate to a noticeably greater extent [80,81]. Apparently, the polymer acts as a hydrophilic shell on the surface of the fullerene aggregates. The copolymers used contain the electron-donor elements oxygen and nitrogen (the latter in the case of VM), so they may form charge-transfer complexes with fullerene [53]. It can be assumed that the SM copolymer is able to bind to fullerene through π-π interactions. Since C60, toluene and xylene contain sp²-states, C60 acts as a π-electron acceptor when interacting with them [82]. Besides donor-acceptor interactions, Coulomb stabilization plays a role in the stabilization of the colloidal solution of the complexes. The copolymers of maleic acid contain two carboxyl groups with different pK values in the monomeric unit. We have shown previously [83] that in the pH range of 4-6, one carboxyl group of the maleic acid residue in the copolymers is ionized by 60-70%, while the other is practically not ionized; at neutral pH, however, the ionization of both carboxylic groups is significant. The ζ-potential measured for all samples was from −16 to −20 mV. The obtained values of the zeta potential indicate the good stability of the colloidal solution of the composites [84]. A negative zeta potential means that the composite surface has a negative charge, which may contribute to the high dispersion stability in solution due to inter-particle electrostatic repulsive forces. The FTIR spectra of the fullerene composites and the initial copolymers are presented in Figure 3.
The characteristic absorption bands of fullerene C60 at 1429, 1181, 576 and 527 cm⁻¹ are attributed to the C-C vibrational modes of the C60 molecules [85]. Against the background of the absorption bands of the copolymers, characteristic absorption bands of fullerene are observed in the IR spectra of the hybrid structures at ν = 527, 576 (577) and 1419 (1421, 1426) cm⁻¹ (Figure 3(a2-c2)). Because the polymer and fullerene bands overlap, Figure 3 highlights the most intense bands of fullerene. This indicates the presence of native fullerene in the C60-copolymer conjugates. The noticeable change in the FTIR spectra between the composites and the copolymers concerns the position of the C=O band of the carboxyl group. For example, in the VM spectrum (Figure 3(a1)), the C=O stretching vibration at 1719 cm⁻¹ corresponds to non-ionized carboxyl groups of the maleic acid residues. In the FTIR spectrum of C60/VM (Figure 3(a2)), the C=O stretching vibration is shifted to 1575-1576 cm⁻¹, which corresponds to the ionized form of the carboxyl groups of the maleic acid residues of the copolymers [65]. For the composites C60/EM and C60/SM, this band lies in the range of 1561-1569 cm⁻¹. The SM sample (Figure 3(c1)) also contained partially non-hydrolyzed anhydride groups (bands at 1779 and 1860 cm⁻¹), which, in the composite, were also converted into ionized carboxyl groups. A comparative analysis of the TGA curves of the C60/copolymer hybrid structures, C60, the copolymers and mechanical mixtures of the copolymers with fullerene can serve as proof of the inclusion of fullerene in the structure of the polymer composite (Figure 4). The native C60 was stable up to 800 °C, with almost no loss of mass (Figure 4a-c, curves 4). The initial copolymers decompose mainly at about 400 °C (Figure 4a-c, curves 1). The TGA curves of the mechanical mixtures (~30 wt.% fullerene) practically repeat the curves of the individual components contained in the mixture. The decomposition of the fullerene-containing samples occurred at about 100 °C lower than that of individual C60. The TGA curves of the C60-copolymer hybrid structures (Figure 4a-c, curves 3) differ from those of the mixtures of the system components in their smoother course. At low temperatures, the samples lost mass, probably as a result of the cleavage of solvent molecules.
Table 2 shows comparative data on the decomposition of the studied samples. We noted that, for the composites, the last stage of destruction lasts longer and continues above 700 °C, in contrast to the mechanical mixtures and initial copolymers (Figure 4, Table 2). Previously, a similar effect was observed for C60/epoxy polymer nanocomposites [86]. For the composite samples, the found values of the residual weight at 600 °C generally exceeded the calculated values (Table 2). We conducted a control experiment in which fullerene was subjected to the same procedure that was used in the preparation of the conjugates: the C60 solution in NMP was subjected to dialysis against water, followed by lyophilic drying. The resulting water-insoluble red-brown powder was also studied by the TGA method (Figure 5, curve 1). The TGA curve for this compound differs from that of the original C60, and its decomposition behavior is similar to that of the C60 complexes with the copolymers. In this case, the last stage of destruction of this sample also continues at temperatures above 700 °C; the decomposition stage of fullerene at this temperature is practically degenerate. In contrast to C60 fullerene, a loss of mass is observed at temperatures above 100 °C, probably due to the removal of bound water and NMP. Judging by the TGA curves, the composites do not include individual fullerene formations; evidently, the fullerene component is associated with NMP molecules, which was also confirmed by the DLS, elemental analysis and XPS data. Figure 6 shows the high-resolution C 1s photoelectron spectra of the prepared composites. The spectra in Figure 6a are normalized by the intensity of the main peak. Figure 6b-d show the spectra of the samples C60/SM, C60/EM and C60/VM fitted with several Gaussian profiles in accordance with their chemical formulas and reliable chemical shifts, respectively [87].
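The decomposition of the C 1s region into Gaussian components (after Shirley background subtraction, as described in the Instrumentation section) can be outlined as follows. This is an illustrative sketch on synthetic, background-subtracted data, not the spectrometer software used by the authors; the component positions are generic C 1s chemical shifts, with 285.0 eV being the C-C/C-H charge reference quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(E, *p):
    """Sum of Gaussian components; p holds (amplitude, centre, sigma) triplets."""
    y = np.zeros_like(E)
    for a, c, s in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-0.5 * ((E - c) / s) ** 2)
    return y

# Synthetic, background-subtracted C 1s region with three components:
# C-C/C-H (285.0 eV), C-O (286.5 eV) and O-C=O (288.8 eV).
E = np.linspace(282, 292, 400)  # binding energy grid, eV
true = (1.0, 285.0, 0.6, 0.4, 286.5, 0.6, 0.3, 288.8, 0.7)
rng = np.random.default_rng(0)
I = gaussians(E, *true) + rng.normal(0, 0.01, E.size)

p0 = (0.8, 285.0, 0.5, 0.3, 286.5, 0.5, 0.3, 288.8, 0.5)  # initial guesses
popt, _ = curve_fit(gaussians, E, I, p0=p0)

for a, c, s in zip(popt[0::3], popt[1::3], popt[2::3]):
    area = a * s * np.sqrt(2 * np.pi)
    print(f"{c:.2f} eV: FWHM {2.355 * s:.2f} eV, relative area {area:.2f}")
```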
Figure 6 shows the high-resolution C 1s photoelectron spectra of the prepared composites.The spectra in Figure 6a are normalized by the intensity of the main peak.Figure 6b-d show the spectra of the samples C60/SM, C60/EM and C60/VM fitted with several Gaussian profiles in accordance with their chemical formulas and reliable chemical shifts, respectively [87].The TGA curve for this compound differs from that of the original C 60 , and its tendency toward decomposition is similar to that of C 60 complexes with copolymers.In this case, the last stage of destruction of this sample also continues at temperatures above 700 • C; the decomposition stage of fullerene at this temperature is practically degenerate.In contrast to the C 60 fullerene, at temperatures above 100 • C, a loss of mass is observed, probably due to the removal of bound water and NMP.Judging by the TGA curves, the composites do not include individual fullerene formations, and obviously, the fullerene component is associated with NMP molecules, which was also confirmed by DLS, elemental analysis and XPS data. Figure 6 shows the high-resolution C 1s photoelectron spectra of the prepared composites.The spectra in Figure 6a are normalized by the intensity of the main peak.Figure 6b-d show the spectra of the samples C 60 /SM, C 60 /EM and C 60 /VM fitted with several Gaussian profiles in accordance with their chemical formulas and reliable chemical shifts, respectively [87].The TGA curve for this compound differs from that of the original C60, and its tendency toward decomposition is similar to that of C60 complexes with copolymers.In this case, the last stage of destruction of this sample also continues at temperatures above 700 °C; the decomposition stage of fullerene at this temperature is practically degenerate.In contrast to the C60 fullerene, at temperatures above 100 °C, a loss of mass is observed, probably due to the removal of bound water and NMP.Judging by the TGA curves, the composites do not include individual fullerene formations, and obviously, the fullerene component is associated with NMP molecules, which was also confirmed by DLS, elemental analysis and XPS data. Figure 6 shows the high-resolution C 1s photoelectron spectra of the prepared composites.The spectra in Figure 6a are normalized by the intensity of the main peak.Figure 6b-d show the spectra of the samples C60/SM, C60/EM and C60/VM fitted with several Gaussian profiles in accordance with their chemical formulas and reliable chemical shifts, respectively [87].The characteristics of the peaks are presented in Table 3.Based on the quantitative analysis data, the binding energies and Figure 6 The relative concentrations of C 60 decrease in the sequence C 60 /EM, C 60 /VM and C 60 /SM (Table 3).The -C=C-groups correspond to the benzene ring in the C 60 /SM sample.CH 2 groups are present in all carbochain copolymers, as well as carboxyl groups of maleic acid residues -C(O)O (Table 3).Thus, an analysis of XPS spectra confirms the presence of fullerene in the resulting conjugates, as well as the composition of the samples. The resulting composites were tested for antibacterial activity against conditionally pathogenic microorganisms by the serial microdilution method.We use E. coli-Gramnegative, facultative anaerobic bacteria; S. aureus MRSA-Gram-positive bacteria, resistant to oxacillin; C. albicans-yeast-like fungi; P. aeruginosa-aerobic non-fermenting Gramnegative bacteria and E. 
At a sample concentration of 300-400 µg/mL, no bactericidal activity of the studied conjugates was detected against any of the strains of the microorganisms used. When using an increased concentration of C60/VM of up to 400 µg/mL, no activity against C. albicans was observed either. It may be noted that Japanese researchers showed carboxyfullerene (six carboxyl groups attached to fullerene C60) to be effective in the treatment of both Gram-positive and Gram-negative infections in in vitro experiments at doses of 5-50 µg/mL [21]. At the same time, C60 fullerene in the form of a complex with polyvinylpyrrolidone does not affect the microflora or any of the tested microorganisms, including Gram-positive cocci, streptococci, enterococci and Gram-negative bacteria [22]. Although the resulting stabilized aqueous C60 dispersions did not exhibit antibacterial activity, their subsequent modification makes it possible to obtain a sample with a pronounced antimicrobial effect. When silver nanoparticles in complex with EM (EM/Ag⁰) were introduced into the composite with fullerene, the MIC value in relation to S. aureus MSSA was 19.5 µg/mL, or 6.7 µg/mL in terms of the silver content in the sample. For EM/Ag⁰ alone, the MIC value is 54.5 µg/mL, or 23.4 µg/mL in terms of the silver content in this sample. Thus, the resulting aqueous C60 dispersions are, on the one hand, promising as potential non-toxic carriers of pharmacologically active substances, and on the other hand, they can be modified to impart antibacterial activity.

Conclusions

We have demonstrated the production of stable water-soluble composites of C60 containing significant amounts of fullerene. Available ionogenic amphiphilic carbon-chain copolymers of maleic acid with a regular structure were used as stabilization and hydrophilization agents in such systems. The protocol for the preparation of the C60 conjugates eliminates the use of toxic organic solvents and time-consuming preparation methods, such as heating, prolonged stirring, ultrasonic treatment, etc. The transformation is achieved under mild conditions using the easily implemented "dialysis method". In a scalable version, a similar process, tangential ultrafiltration, can also be used. In the resulting nanostructures, guest C60 molecules are confined by a polymer shell. The polymer shell provides the stabilization and solubility of the encapsulated C60 in water through non-covalent interactions: hydrophobic, steric and electrostatic. The resulting composites contained about 30% fullerene. The composite containing the more hydrophilic copolymer, poly(N-vinylpyrrolidone-alt-maleic acid), had better solubility in water. Although the composites showed no activity as antibacterial agents, their fortification by introducing active bactericides, such as silver, can impart antimicrobial properties. The proposed approach may be used to obtain composites containing modified fullerene, for example its amino acid adducts and other derivatives, which exhibit biological activity but poor solubility. There is also potential for using the obtained fullerene conjugates in other fields. The antioxidant properties of fullerene as part of a complex with non-toxic copolymers can be exploited in external products in cosmetology and dermatology [88].
Table 2. Characteristics of the samples and TGA analysis data. (a: average values according to gravimetric and elemental analysis and spectrophotometry of the corresponding solutions (λ = 340 nm, ε = 68,000 dm³ mol⁻¹ cm⁻¹) [71]; b: at 600 °C.)

Table 3. Parameters of components in the C 1s photoelectron spectra of the studied samples: Eb, binding energy; W, Gaussian peak width; Irel, relative intensity. (*: carbon atoms bonded to a C(O) group.)
Ultrafast studies on the photophysics of matrix-isolated radical cations of polycyclic aromatic hydrocarbons: implications for the Diffuse Interstellar Bands (DIB) problem

Rapid, efficient deactivation of photoexcited PAH cations accounts for their remarkable photostability and has important implications for astrochemistry, as these cations are the leading candidates for the species responsible for the diffuse interstellar bands (DIB) observed throughout the Galaxy. Ultrafast relaxation dynamics for photoexcited PAH cations isolated in boric acid glass have been studied using femtosecond and picosecond transient grating spectroscopy. With the exception of perylene⁺, the recovery kinetics for the ground doublet (D0) states of these radical cations are biexponential, containing a fast (<200 fs) and a slow (3-20 ps) component. No temperature dependence or isotope effect was observed for the fast component, whereas the slow component exhibits both an H/D isotope effect (1.1-1.3) and a strong temperature dependence (15 to 300 K). We suggest that the fast component is due to internal Dn → D0 conversion and the slow component is due to vibrational energy transfer (VET) from a hot D0 state to the glass matrix.

Introduction

Processes involving radical cations of polycyclic aromatic hydrocarbons (PAHs) occur in many areas of chemistry, such as organic synthesis, photo- and radiation chemistry, and astrochemistry. In particular, these radical cations are thought to be responsible for diffuse interstellar bands (DIBs) in star-forming clouds in our Galaxy [1-4]. While the association of these ubiquitous bands with PAH cations is still tentative [1], these radical cations, together with their parent molecules and corresponding carbonium ions, could be the prevalent form of organic matter in the Universe [4]. Why are these exotic (by terrestrial standards) species so common? Some researchers speculate that PAHs are generated in mass-losing carbon stars, in ion-molecule reactions of neutral and ionized C atoms, by condensation of C chains, or by thermo- and photo-induced condensation of molecules adsorbed on dust particles (see, for example, ref. 4). In these scenarios, the high abundance of PAH cations is due to the high production rate of their parent molecules. Other researchers focus [1,3,5] on the dynamics of gas-phase carbonaceous molecules and ions in the interstellar medium, where these species are exposed to intense radiation from young stars. It seems that PAH cations of intermediate size are favored in this harsh environment [3,5]. Regardless of the exact answer, it is certain that (i) PAH cations are products of a lengthy chemical evolution driven by heat and radiation, and (ii) these cations are abundant in space because they are more photostable than most neutral molecules and anions [1-4]. In particular, the primary decay processes for photoexcited radical cations of PAHs are nonradiative [5-10]: almost all of the excitation energy is converted into heat that (in the interstellar medium) is emitted as IR radiation [1,2]. It is the purpose of this work to study the photophysical processes of PAH radical cations. While the photophysics and reactivity of the excited singlet and triplet states of neutral PAH molecules are well understood, very little is known about the photophysics of their radical cations. In the gas phase, photofragmentation of C10-C16 PAH cations by loss of H, H2 and C2H2 has been studied by mass spectrometry [5].
Typical dissociation rates for 7 eV excess energy are (1-3)×10^3 s^-1, and the onsets of dissociation are 4-4.5 eV (which is close to the C-H bond dissociation energy). [5] These gas-phase studies demonstrate the remarkable photostability of PAH cations (as compared to their parent molecules) and suggest fast energy relaxation in these species. There have also been numerous EPR, UV-VIS, and IR studies of matrix-isolated PAH cations. From these studies and concurrent ab initio and density functional theory calculations, a wealth of data on the structure and energetics of aromatic radical cations has emerged (e.g., see Table 1 and references given therein). Recently, Vauthey and coworkers [7,8] examined the relaxation dynamics of photoexcited matrix-isolated radical cations of several organic molecules, including perylene+ and tetracene+. The 640 nm (D0 ← D1) band in the fluorescence spectrum of perylene+ was observed, and a quantum yield of 10^-6 was obtained. Picosecond (for perylene+) and subpicosecond (for tetracene+) transient grating (TG) spectroscopy was used to observe the recovery of the D0 state following laser photoexcitation (in the D0 → D5 and D0 → D1 bands, respectively). This study brought an unexpected result: although the D1 → D0 conversion in tetracene+ and perylene+ was fast (25 to 100 ps, depending on the matrix) [7,8] as compared to the typical times of 10^-9 to 10^-6 s for the S1 → S0 transition in neutral PAH molecules, [9] this conversion was much slower than the typical times of 10-500 fs for nonradiative Sn → Sn-1 transitions that involve higher excited singlet states (Sn) of these PAH molecules. [9,10] Since the rate of a nonradiative transition broadly correlates with the energy gap between the initial and final states, [10] these energetic Sn states should have provided a good reference system for the lower doublet states of aromatic radical cations, as the corresponding energy gaps are comparable (0.5-1 eV; see Table 1). Since tetracene+ and perylene+ have unusually large D1-D0 gaps of 1.44 and 1.56 eV, [8] respectively, it appears that these two radical cations might represent the exception rather than the rule. To observe nonradiative transitions in a typical PAH cation, time resolution better than 1 ps is needed.
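For orientation, the gap correlation invoked above is usually written (in the weak-coupling limit of radiationless transition theory; this is the textbook form, not anything specific to this work) as

k_nr ∝ exp(-γ ΔE / (ħ ω_M)),

where ΔE is the electronic energy gap, ω_M is the frequency of the highest-frequency accepting mode (the C-H stretch in PAHs), and γ is a slowly varying molecular parameter. Larger gaps thus give exponentially slower nonradiative decay, and deuteration (which lowers ω_M) slows it further.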
In this work, the photophysics of radical cations of perprotio and perdeuterio anthracene, naphthalene, biphenyl, and perylene stabilized in boric acid glass is studied using femtosecond transient grating spectroscopy. Our results suggest that for most of the PAH cations, the excited electronic states relax through a nonradiative Dn → D0 transition that occurs in less than 200 fs. This process rapidly converts the electronic energy into vibrational energy of the ground D0 state, which then undergoes vibrational energy transfer (VET), on a picosecond time scale, by heat transfer to the matrix. The typical VET times observed in the boric acid glass are 5 to 20 ps. Perylene+ has an exceptionally long lifetime for the D1 → D0 conversion, ca. 19 ps, by virtue of its unusual energetics. To save space, some data are given in the Supporting Information. Figures and tables with a designator "S" after the number (e.g., Fig. 1S) are placed therein.

Experimental

Sample preparation. Orthoboric acid (H3BO3), biphenyl-h10 and -d10, naphthalene-h8 and -d8, anthracene-h10 and -d10, pyrene-h10, and perylene-h12 (see Fig. 1S for the structures) of the highest purity available from Sigma-Aldrich were used as received. PAH-doped glass was prepared by adding crystalline PAHs to the boric acid melt at 200-240 °C. The melt was cast between two thin windows made of fused silica, calcium fluoride, or sapphire and produced high-quality, optically clear glasses upon cooling. Typical glass film thickness was 100 to 400 µm. These samples were then exposed to 5 to 50 pulses of 248 nm light (15 ns fwhm, 0.05 J/cm²) from a Lambda Physik model LPx-120i KrF excimer laser, at room temperature. Absorption spectra [11,12] confirmed the formation of radical cations [11] with conversion efficiency better than 80%. The concentrations of the radical cations were spectrophotometrically determined to be 0.1-0.5 mM (for perylene+, ca. 20 µM). Because of a large nonresonant TG signal observed in the windows, the glass film was removed from the substrate and transparent 2 mm × 2 mm pieces were mounted on the cold finger of a closed-cycle helium refrigerator (Lake Shore Cryogenics) operable from 10 to 300 K. [13] Typical absorbance of these films at the excitation wavelength was 0.3-1. For absorbances < 0.1, the nonresonant TG signal from the glass matrix was superimposed on the TG signal from the PAH cations.

Transient grating spectroscopy. The ground-state recovery dynamics of PAH cations were studied using femtosecond TG spectroscopy. Details of this technique can be found elsewhere. [7,8,14] Briefly, a standard three-beam transient grating setup using a folded BOXCARS geometry with crossing angles of ca. 2° was used. [7,8,14] Calcite Glan-Thompson polarizers were placed in each of the three beam paths and the scattered probe beam to ensure pure and parallel polarization, so that the observed signal reflected the dynamics of the χ^(3)_1111 element of the third-order nonlinear susceptibility tensor. [7,14] Under the conditions of our experiments, a density grating (due to heat-induced expansion of the sample) is produced simultaneously with the population grating that we are interested in observing. The density grating generates an acoustic standing wave that over time would affect the population grating. The density grating has no effect on our measurements, because the small-angle geometry used in this setup (~2°) results in an acoustic period of >10 ns.
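A quick back-of-the-envelope check of this geometric argument (a sketch, not the authors' calculation; the speed of sound used below is an assumed, generic value for a glass):

```python
# Grating fringe spacing for two beams crossed at a small angle, and the
# period of the acoustic standing wave launched across one fringe.
import math

wavelength = 680e-9        # pump wavelength (m), as used for naphthalene+
theta = math.radians(2.0)  # full crossing angle of the pump beams
v_sound = 2.0e3            # assumed speed of sound in the glass (m/s)

fringe = wavelength / (2 * math.sin(theta / 2))  # standard TG fringe spacing
print(fringe * 1e6, "um fringe spacing")             # ~19.5 um
print(fringe / v_sound * 1e9, "ns acoustic period")  # ~10 ns, i.e. far slower
                                                     # than the ps dynamics probed
```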
The instrument response was determined by obtaining a nonresonant TG signal, due to the optical-field-induced electronic and nuclear Kerr effect, in a thin plate of fused silica or undoped boric acid glass before and after each measurement. This response was Gaussian, with a typical fwhm of 60-70 fs (for femtosecond kinetics) or 160 fs (for picosecond kinetics). The tunable source of femtosecond light pulses used in the experiments is an optical parametric amplifier (Spectra Physics OPA model 8000CF). The OPA is pumped by an 800 µJ, 800 nm, 55 fs fwhm pulse split off from a two-stage Ti:sapphire-based chirped-pulse amplifier system (3.5 mJ) operated at a repetition rate of 1 kHz. Passing the output of the OPA through a pair of SF-10 prisms resulted in transform-limited pulses of 45-55 fs duration (depending on the wavelength). The pump energy (in each pump beam) was less than 500 nJ and the probe energy was less than 50 nJ. To subtract the background signal, one of the pump beams was chopped at 500 Hz, and the photodiode signal was fed into a digital lock-in amplifier (SRS model 810). Otherwise, the detection electronics were identical to those described previously. [15] To obtain subpicosecond kinetics, 50-100 traces acquired with a time constant of 30 ms were averaged. The delay times of the probe pulses were changed either linearly (∆t = 15 fs) or on a quasilogarithmic grid (Fig. 1). [15] The error bars shown in some kinetics are 95% confidence limits. To obtain picosecond kinetics, the delay time of a 160 fs pulse was changed in steps of 150-200 fs, and 20-30 traces acquired with a time constant of 300 ms were averaged.

The cations were photoexcited into their higher doublet states (see Table 1). Following the excitation pulse, the recovery of the photobleached D0 state is observed. The intensity of the grating signal is proportional to the square of the concentration of photoexcited cations. For an exponential process, the TG signal therefore decays at twice the rate of the D0 state recovery. Note that only the population grating [7,14] contributes to the TG signal observed at the end of the pump pulse. During photoexcitation, a nonresonant signal due to the optical Kerr effect (Fig. 1) [8,14] also contributes to the TG signal. This signal makes it difficult to observe rapid processes that occur in less than 50 fs.

Results

Following the 680 nm photoexcitation of naphthalene+, there is a short-lived TG signal whose single-exponential decay kinetics correspond to a lifetime τf of 195 ± 5 fs (Figs. 1 and 3S(a)). After the decay of this short-lived signal, there is a slower TG signal that decays over a period of tens of picoseconds (Figs. 1, 2(b), and 3S(b)). This slow signal comprises ca. 10% of the initial "spike" observed at short delay times (the apparent relative weight of the slow component increases for longer excitation pulses). Using a longer (160 fs) pulse, we were able to obtain better-quality decay kinetics for t > 1 ps, which are shown separately in Fig. 2 (Table 2). Cooling the sample also changes this slow component (Fig. 2). For naphthalene-d8+, τs becomes progressively longer as the temperature decreases (Table 2), and the relative weight of the slow component increases by 50% upon cooling from 300 to 70 K (Fig. 2(b)). Unlike the fast component, the slow decay kinetics are sensitive both to the sample temperature and to H/D substitution in the aromatic radical cation.

Discussion

In our experiments, two relaxation regimes for the recovery kinetics of the D0 state are observed: a subpicosecond regime and a picosecond regime. Given that these cations are photoexcited to their D2, D3, or even D5 states, it is tempting to associate the fast and slow components with nonradiative Dn → D1 and D1 → D0 transitions, respectively. For aromatic singlets, the slowest (usually, radiative) transition is from the first excited S1 state to the ground S0 state (Kasha's rule), whereas the less energetic Sn → Sn-1 transitions are nonradiative and occur much faster (< 10 ps). [9,10] Typically, the energy gaps between the Sn and Sn-1 states are 2,000 to 6,000 cm^-1 (vs. 10,000-25,000 cm^-1 for the S1 and S0 states) and the transition times are 10-20 fs to 1-5 ps (vs. 1 ns to 1 µs for the S1 and S0 states). [9] For the doublet manifold of PAH cations, the energy gaps between the D0 and D1 states are 7,000 to 10,000 cm^-1 (see Table 1), and one would expect to observe rapid nonradiative transitions in the doublet manifold. For some transitions, however, the energy-gap law was shown to have limited applicability, [9] as a cursory examination of Tables 1 and 2 suggests.

In a vibrationally "hot" D0 state, the spectral line is broader than the same line in a thermalized cation. [20a] For a given probe wavelength, the difference between the absorbances of the hot and relaxed D0 states strongly depends on the overlap between the spectra of these two states: the greater the overlap, the smaller the slow TG signal.
At the lower temperature, the line is narrower, the spectral overlap becomes worse, and the weight of the slow TG signal increases. In perylene+, due to an exceptionally large D0-D1 gap, the internal conversion is inhibited, and the 18.8 ps decay kinetics reflect the slow internal conversion of the D1 state to the ground D0 state. Since for perylene+ (and, possibly, tetracene+) [8] the slow component is due to electronic rather than vibrational relaxation, the contrast between the temperature dependencies shown in Fig. 2 is to be expected. The two-step mechanism qualitatively accounts for the isotope and temperature dependencies.

The typical quantum yields for these two reactions are given in Table 1S in the Supporting Information. Since the heats of reactions (1) and (2) depend on the difference between the ionization potentials (IPs) of the parent aromatic molecule and the solvent, these quantum yields correlate with the molecular IP (Table 1S and refs. 27 and 28). Interestingly, some PAH cations, such as anthracene+ and biphenylene+, exhibit very low yields of hole injection, though the corresponding IPs are sufficiently high and the absorbance of the cation at the excitation wavelength is strong. [28] Conversely, photoexcited perylene+ oxidizes both polar and nonpolar solvents, though the corresponding reactions are barely exothermic. [27,28] Our kinetic data rationalize these observations. Photoexcited anthracene+ is extremely short-lived, and the hole injection involving this species is inefficient. Photoexcited perylene+ is very long-lived, and even slow, inefficient hole injection can occur. From the data of Tables 1S and 2, we estimate that for naphthalene+ and biphenyl+ in 2-propanol, reaction (2) occurs with a rate constant of (3-5)×10^12 s^-1, whereas for perylene, this rate constant is just 10^8 s^-1. Note that if the lifetimes of the electronically excited radical cations involved in the "hole injection" were given by τs (as in the first model considered above) rather than by τf (as in the second model), the anomalous behavior of anthracene+ would be difficult to account for, as this cation has a longer τs than naphthalene+ and biphenyl+ (Table 2).

Fig. 1. Transient grating signal for the recovery of the ground D0 state of the radical cation observed in a 680 nm pump - 680 nm probe (D0 → D2 band, 0-0 transition) photoexcitation of naphthalene-h8+ (trace (i), filled circles) and naphthalene-d8+ (trace (ii), crosses) in a boric acid glass at 295 K. The optical density of this 300 µm thick glass sample was ca. 1. The dashed line (trace (iii), filled diamonds) is a nonresonant optical Kerr effect signal from a thin Suprasil plate, which is taken as an autocorrelation trace for the excitation/probe pulse (in this case, a Gaussian pulse of 63 fs fwhm). The solid line through trace (i) is the least-squares fit obtained by convoluting trace (iii) with a biexponential function (the time constants are given in Table 2).

Fig. 2. The lines drawn through the symbols are the least-squares exponential fits. The TG kinetics were normalized at the signal maximum (taken as unity). The initial "spike" (see Fig. 1, traces (i) and (ii)) is not shown. As the temperature decreases, the relative weight of the slow component increases and its decay kinetics become slower.

This VET occurs on a picosecond time scale and gives rise to the observed slow kinetics (the "tail" in Fig. 1, trace (i)). These slow kinetics (unlike the rapid internal conversion) are temperature and isotope dependent. The heat transferred to the first solvation shell around the radical cation then slowly dissipates to the matrix bulk. The resulting density grating is not observed (< 1 ns), due to the polarization geometry used in our experiment.
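To make the fitting procedure behind Fig. 1 concrete, here is a minimal sketch (our illustration, not the authors' code; all numbers are placeholders of roughly the right magnitude) of fitting a TG trace with a biexponential convolved with a Gaussian instrument response. The lifetimes extracted this way characterize the TG trace itself; as noted above, for an exponential process the underlying population decays at half this rate.

```python
import numpy as np
from scipy.optimize import curve_fit

def tg_model(t, a_fast, tau_fast, a_slow, tau_slow, fwhm):
    """Biexponential decay convolved with a Gaussian instrument response.

    t is a uniform delay grid (ps); lifetimes and fwhm are in ps."""
    sigma = fwhm / 2.3548                       # Gaussian FWHM -> sigma
    irf = np.exp(-0.5 * ((t - t[t.size // 2]) / sigma) ** 2)
    irf /= irf.sum()
    tc = np.clip(t, 0.0, None)                  # decay starts at t = 0
    decay = np.where(t >= 0,
                     a_fast * np.exp(-tc / tau_fast)
                     + a_slow * np.exp(-tc / tau_slow),
                     0.0)
    return np.convolve(decay, irf, mode="same")

t = np.linspace(-0.5, 4.5, 1000)                # delay grid (ps)
rng = np.random.default_rng(0)
# Synthetic "data" with tau_f = 0.195 ps and a weak slow tail, roughly as
# reported for naphthalene-h8+ at 295 K, plus noise.
data = tg_model(t, 0.9, 0.195, 0.1, 15.0, 0.063) + rng.normal(0, 0.003, t.size)
popt, _ = curve_fit(tg_model, t, data, p0=(1.0, 0.15, 0.1, 10.0, 0.06))
print(popt)  # recovered amplitudes, lifetimes (ps), and IRF width
```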
In this section, we examine a scenario in which the fast and the slow decay components in the recovery kinetics of the D0 state are attributed to nonradiative Dn → D1 and D1 → D0 transitions, respectively. Since, for obvious reasons, the fast Dn → D1 component cannot be observed by following the repopulation kinetics of the D0 state, for this scheme to work one needs to postulate the involvement of a light-absorbing D1 state. The first excited doublet state of naphthalene+ (D1) lies ca. 0.89 eV above the ground D0 state, and the radiative D0 → D1 transition is symmetry forbidden (the symmetry representations and energetics for D2h-symmetric PAH cations are given in Table 1 and Fig. 2S). The D2 state (the final state upon 1.82 eV excitation of the ground-state cation) is ca. 0.92 eV more energetic than the D1 state. The first allowed optical transition of the D2 state is to the D5 state (which is 1.94 eV higher in energy), so it cannot contribute to the TG signal obtained using 1.82 eV probe light. By contrast, a D1 → D3 transition is both energetically and symmetry allowed. A similar situation occurs for anthracene+ and pyrene+ excited to their D2 states (Fig. 2S) and biphenyl+ excited to the D3 state (Fig. 8S). Note that for all of these radical cations the transition dipole moments of the D0 → D1 and D1 → Dn transitions are perpendicular (Figs. 2S and 8S). Since the pump and probe pulses have the same polarization, on average only 1/3 of the generated D1 state cations can be observed in our experiment (assuming no reorientation of the cations in the glass matrix). [1S] Though more than one species can contribute to the transient grating signal, provided that the observed fast kinetics are from a rapid, nonradiative D1 → D0 transition (as argued below), the TG signal would reflect the recovery dynamics of the D0 state even if both the D0 state and the D1 state absorb the probe light.

If the slow component is due to the D1 → D0 transition, one may expect the corresponding rate constants to follow the energy-gap law (see the Discussion). In accord with these expectations, the τs times for the picosecond component at 295 K systematically increase with the D1-D0 energy gap, from biphenyl+ to perylene+ (though this correlation vanishes when low-temperature lifetimes are compared). For pyrene+ and anthracene+ (D0 → D2 excitation), the D2-D1 and D1-D0 gaps are 3,000 and 10,700 cm^-1 and 5,000 and 9,400 cm^-1, respectively. [16] Conceivably, the rapid relaxation observed for these two cations (τf < 50 fs) could be from a D2 → D1 transition, and the picosecond component from a slower D1 → D0 transition. For anthracene+ and naphthalene+ in solid Ne and Ar, [16] the line widths of the 0-0 lines are 120 and 50 cm^-1, respectively (the D0 → D2 transition). If these line widths are due to lifetime broadening, one obtains estimates of 44 and 106 fs, respectively. These lifetimes compare favorably with the experimental lifetimes for the fast component.
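The linewidth-to-lifetime conversion used here, τ = 1/(2πcΔν) for a lifetime-broadened Lorentzian FWHM Δν given in wavenumbers, is easy to check numerically (a sketch):

```python
# Lifetime implied by a purely lifetime-broadened (Lorentzian) linewidth:
# tau = 1 / (2 * pi * c * FWHM), with the FWHM in cm^-1.
import math

C_CM_PER_S = 2.998e10  # speed of light, cm/s

def lifetime_fs(fwhm_cm1):
    return 1e15 / (2 * math.pi * C_CM_PER_S * fwhm_cm1)

print(lifetime_fs(120))  # anthracene+ in solid Ne/Ar: ~44 fs
print(lifetime_fs(50))   # naphthalene+: ~106 fs
```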
For biphenyl+ (D0 → D3 excitation), [17] there are two nearly degenerate states, D1 and D2, midway between the D0 and D3 states (Fig. 8S and the caption to this figure). The D3-D1,2 gap is 6,300 cm^-1, which is larger than the D2-D1 gap in pyrene+ and anthracene+ and smaller than the D2-D1 gap in naphthalene+ (7,400 cm^-1). [16,17] Following this trend, the fast component for biphenyl+ decays more slowly than the fast component for anthracene+ and faster than the fast component for naphthalene+. For perylene+ (D0 → D5 excitation) [16] and tetracene+ (D0 → D1 excitation), [18] the D0-D1 gaps are large (> 11,000 cm^-1), which accounts for their slow decay kinetics. The lack of a fast component for perylene+ can be accounted for by the high density of its Dn states (the largest gap, 4,600 cm^-1, is for the D5 → D4 transition), [16] i.e., by extremely rapid Dn → D1 relaxation.

Despite some correlation between the τf and τs times and the corresponding energy gaps, the scenario in which the fast and slow components arise from the Dn → D1 and D1 → D0 transitions, respectively, is not supported by our kinetic data.

First, the energetics argument is not consistent. For naphthalene+, the D2-D1 and D1-D0 gaps are comparable, yet one component is 40 times slower than the other (Fig. 1 and Table 2). A similar discrepancy is observed for biphenyl+, for which the D3-D1,2 and D1,2-D0 energy gaps are also comparable (Fig. 8S). It is known that the H/D isotope effect on the rate of nonradiative transitions (e.g., T1 → S0 transitions) correlates with the energy gap, as only high-frequency C-H stretching modes contribute to this effect. [10] For energy gaps smaller than 15,000 cm^-1, the isotope effect is usually quite small. [10] For naphthalene+, anthracene+, and biphenyl+, the D0-D1 gaps are smaller than 10,000 cm^-1, yet the corresponding τs times change considerably with H/D substitution. If the picosecond components were always due to D1 → D0 transitions, why would the time constants τs of these transitions vary with temperature and isotope composition for some PAH cations (such as naphthalene, biphenyl, and anthracene) but not for others (such as perylene), and why would the fast components not exhibit such variations?

Second, simple kinetic considerations suggest that if the D1 state contributed to the TG kinetics, larger variations in the time profiles with respect to the absorption properties of the D0 and D1 states should be observed. Let K = τf/τs be the ratio of the fast (Dn → D1) and slow (D1 → D0) relaxation times, and let ρ = ε(D1)/3ε(D0) be the ratio of the molar extinction coefficients ε of the D1 and D0 states corrected by the polarization factor of 1/3. Then, for K << 1, the transient grating signal is proportional to

{ exp(-t/τf) + (1-ρ) [exp(-t/τs) - exp(-t/τf)] }²   (1S)

convoluted with the time profile of the excitation pulse. Since K << 1, eq. (1S) gives essentially bimodal kinetics with a fast and a slow component. Using eq. (1S), it is possible to fit the observed TG kinetics for naphthalene+ and biphenyl+ shown in Figs. 1, 3S(a), and 4S(b), with τf of 140 ± 7 fs and 76 ± 7 fs, respectively (and the same τs values as given in Table 2), and ρ of 0.79 and 0.76, respectively. A parameter ρ close to unity helps to explain the kinetic data: for K << 1, the relative weight of the slow component is given by (1-ρ)². In order to obtain the observed range of 2-10% for these weights, ρ should always be 0.7 to 0.9. This does not seem plausible: there is no reason to expect that ε(D1) is always approximately three times greater than ε(D0).
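A one-line numerical check of this weight argument (our illustration):

```python
# Relative weight of the slow component in eq. (1S) for K << 1 is (1 - rho)^2.
for rho in (0.70, 0.76, 0.79, 0.90):
    print(f"rho = {rho:.2f} -> slow-component weight = {(1 - rho) ** 2:.1%}")
# rho between 0.7 and 0.9 spans weights of ~9% down to 1%, i.e. roughly the
# observed 2-10% range quoted above.
```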
Third, if the slow component were due to the D1 → D0 conversion, it is surprising that no vibrational relaxation of the resulting "hot" D0 state is observed afterwards. [20-22] The excess energy of a "hot" D0 state is greater than 10,000 cm^-1. In such a case, the line shapes of the D0 → Dn band for the "hot" and thermalized D0 states have to be different. [19] Specifically, the spectrum of the "hot" state should be broader [19a] and, possibly, redshifted. [21] Given that the excitation bandwidth of the probe pulse, ca. 220 cm^-1, is less than the homogeneous line width of a PAH cation trapped in boric acid glass, 300-500 cm^-1, [11,18b] even a small difference between the spectral lines for the "hot" and the relaxed D0 states should yield a measurable TG signal. There should be a TG component that corresponds to the vibrational relaxation of the "hot" D0 state. We argue that the slow component corresponds to this vibrational relaxation dynamics.

Appendix 2: intermolecular VET in boric acid glass

Below, we speculate on the possible mechanisms for the isotope- and temperature-sensitive vibrational energy transfer from the "hot" D0 state of a planar PAH cation to the glass matrix, which is postulated in the "two-step" relaxation model. The H/D isotope effect may originate through a VET that involves low-frequency C-H modes of the PAH cation [25] and vibrational modes of glass-forming superstructural units in the first solvation shell. In vitreous (v-) B2O3, 85% of boron atoms are incorporated in planar trimeric B3O6 ("boroxol") rings, whose breathing mode at 808 cm^-1 is the strongest peak in the Raman spectrum of this solid. [2S] A matching peak was observed in the vibrational density of states (VDOS) of v-B2O3 obtained in the inelastic neutron scattering experiments of Sinclair and co-workers. [2S] It is likely that PAH molecules in boric acid are sandwiched between these boroxol rings (a related crystalline solid, α-HBO2, consists of infinite sheets of hydrogen-bonded boroxol rings), [3S] i.e., these rings are the most likely acceptor of the vibrational energy from a "hot" D0 state of a PAH cation. The 808 cm^-1 breathing mode of the boroxol ring is close in energy to the out-of-plane C-H modes of planar PAH cations (which cover the interval between 770 and 900 cm^-1 and make a large contribution to the calculated VDOS and IR emission spectra of the PAH cations). [25] For example, naphthalene-h8+ has one of these modes at 849 cm^-1, anthracene-h10+ at 912 cm^-1, and pyrene-h10+ at 861 cm^-1. [25] The close match in energy between the two most prominent VDOS peaks for the cation and the matrix would account for efficient VET from the out-of-plane C-H modes of the PAH cation to a nearby boroxol ring. Typical isotope effects on the vibrational frequency for these C-H modes in the perdeuterio vs. perprotio cations are 1.15-1.2 (as compared to 1.33-1.36 for the high-frequency C-H stretch modes). [25] Thus, the corresponding VET rate could exhibit an isotope effect.
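The stretch-mode ratio quoted for comparison is close to the simple harmonic two-body estimate (a sketch for orientation only; the out-of-plane bends involve motion of more atoms, which is why their ratio is smaller):

```python
# Harmonic diatomic estimate of the C-H vs C-D stretch frequency ratio:
# nu_H / nu_D = sqrt(mu_CD / mu_CH), with reduced mass mu = m1*m2/(m1+m2).
m_C, m_H, m_D = 12.000, 1.008, 2.014  # atomic masses, u

def mu(m1, m2):
    return m1 * m2 / (m1 + m2)

print((mu(m_C, m_D) / mu(m_C, m_H)) ** 0.5)  # ~1.36, cf. 1.33-1.36 above
```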
Since the effective temperature of the "hot" state is much higher than that of the matrix, this transfer rate does not depend on the sample temperature. By contrast, the cooling of the first solvation shell depends on the sample temperature, whereas it obviously does not depend on the isotope composition of the dopant. We speculate that the heat transfer from the first solvation shell to the glass bulk occurs mainly by emission of acoustic phonons. [4S] For T > 10 K, the typical phonon mean free path in a glass is 1-3 nm, [4S] which accounts for efficient transfer of heat from the "hot" first solvation shell to the glass matrix. The transfer rate should approximately correlate with the thermal conductivity of the glass. For vitreous B2O3 (whose structure is similar to that of boric acid glass), the macroscopic thermal conductivities at 50 and 300 K are 0.2 and 0.52 W/(m·K), respectively. [23] As seen from Fig. 2(a), 1/τs for naphthalene-d8+ changes by approximately the same factor. For all amorphous solids, the thermal conductivity is nearly constant between 2 and 100 K (the so-called "plateau" region); above 100 K, the conductivity steadily increases with the temperature. [23,4S] Following the same trend, τs changes little below 100 K and systematically decreases between 100 and 300 K (Fig. 2(a)).

1S. The statistical factor of 1/3 is due to the fact that the magnitude of a population grating is proportional to cos²θ, where θ is the angle between the transition dipole of a ground-state cation and the direction of the electric field in a pump pulse. The polarizations of the pump, probe, and scattered beams all point in the same direction. Therefore, when the transition dipole of the probed state (e.g., the D1 state) has the same orientation as that of the ground D0 state, the overall weight factor is cos²θ × cos²θ. When the transition dipole of the probed state is perpendicular to that of the D0 state, this factor is cos²θ × sin²θ × cos²χ, where χ is the azimuthal angle of the D1 state dipole in the plane normal to the D0 state dipole. Angle averaging of these two factors gives 2/5 and 2/15, respectively, and the ratio is 1/3.
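These orientational averages are easy to verify numerically (a sketch; the 2/5 and 2/15 below are the sin θ-weighted integrals over θ in [0, π], whose ratio is the 1/3 quoted in the footnote):

```python
# Orientational averages from footnote 1S, by simple trapezoidal quadrature.
import math

def integrate(f, a, b, n=20000):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Parallel pump/probe dipoles: weight cos^4(theta), isotropic sin(theta) measure.
parallel = integrate(lambda t: math.cos(t) ** 4 * math.sin(t), 0, math.pi)

# Perpendicular dipoles: cos^2(theta) sin^2(theta) cos^2(chi); the azimuthal
# average of cos^2(chi) contributes a factor of 1/2.
perpendicular = 0.5 * integrate(lambda t: math.cos(t) ** 2 * math.sin(t) ** 3,
                                0, math.pi)

print(parallel, perpendicular, perpendicular / parallel)  # 0.4, 0.1333..., 1/3
```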
2S. Hannon.

Fig. 4S. Gaussian curves are drawn through these traces. The vertical error bars indicate 95% confidence limits obtained using a two-tailed Student's t-distribution for an average of 100 scans. For anthracene+, the kinetics for the h10 and d10 isotopomers are identical within the experimental error; these kinetics closely follow the time profile of the excitation pulse. For biphenyl+, there is a subpicosecond component. Within the experimental error, the time constant τf is the same for both isotopomers. Note that the slow component for biphenyl-h10+ has a higher weight than the slow component for biphenyl-d10+. In more detail, these slow kinetics can be seen in Fig. 6S.

Fig. 5S. (a) Decay kinetics of the TG signal for anthracene-h10+ (quasi-logarithmic time scan) in boric acid glass at 295 K. These kinetics were obtained using 720 nm, 70 fs pump and probe pulses (D0 → D2 excitation); the autocorrelation trace for this pulse is shown by a solid black line. Vertical bars indicate 95% confidence limits. The picosecond "tail" (multiplied by a factor of 5) is barely seen in these kinetics. (b) Using 160 fs pulses and more extensive averaging, the slow component can be better observed (the initial "spike" is not shown). The kinetics for anthracene-d10+ and anthracene-h10+ in boric acid glass, at 15 K and 295 K, are plotted together (see the legend in the plot). These kinetics were normalized at the signal maximum. The lines drawn through the symbols are single-exponential least-squares fits of the kinetic traces. The time constants are given in Table 2. The relative weight of the slow component in the perprotio cation is greater than in the perdeuterio cation, both at 15 K and 295 K.

Fig. 6S. Slow decay kinetics of the TG signal for (a) biphenyl-h10+ and (b) biphenyl-d10+ in boric acid glass (filled circles for 295 K and empty squares for 15 K). These kinetics were obtained using 680 nm, 160 fs pump and probe pulses (D0 → D3 excitation). The initial "spike" is not shown. The kinetics were normalized at the signal maximum. (c) A comparison between the kinetics for biphenyl-h10+ (filled circles) and biphenyl-d10+ (empty circles) at 15 K. The lines drawn through the symbols in (a)-(c) are single-exponential least-squares fits of the kinetic traces.

Fig. 7S. Evolution of the decay kinetics of TG signals for perylene-h12+ in boric acid glass as a function of the sample temperature T (15 to 240 K; see the legend in the plot). These kinetics were obtained using 540 nm, 160 fs fwhm pump and probe pulses (D0 → D5 excitation). An offset 170 K trace is shown separately (crosses); the line through the symbols is a single-exponential least-squares fit. The time constants τs are given in Fig. 2(a), open squares. For perylene, the subpicosecond component is lacking, and the slow decay kinetics are temperature independent.
Experimental dopaminergic neuron lesion at the area of the biological clock pacemaker, suprachiasmatic nuclei (SCN), induces metabolic syndrome in rats

Background

The daily peak in dopaminergic neuronal activity at the area of the biological clock (hypothalamic suprachiasmatic nuclei [SCN]) is diminished in obese/insulin-resistant vs lean/insulin-sensitive animals. The impact of targeted lesioning of dopamine (DA) neurons specifically at the area surrounding (and communicating with) the SCN (but not within the SCN itself) upon glucose metabolism, adipose and liver lipid gene expression, and cardiovascular biology in normal laboratory animals has not been investigated and was the focus of this study.

Methods

Female Sprague-Dawley rats received either a DA neuron neurotoxic lesion, by bilateral intra-cannula injection of 6-hydroxydopamine (2-4 μg/side), or vehicle treatment at the area surrounding the SCN, at 20 min post protriptyline i.p. injection (20 mg/kg) to protect against damage to noradrenergic and serotonergic neurons.

Results

At 16 weeks post-lesion, relative to vehicle treatment, peri-SCN area DA neuron lesioning increased weight gain (34.8%, P < 0.005), parametrial and retroperitoneal fat weight (45% and 90%, respectively, P < 0.05), fasting plasma insulin, leptin, and norepinephrine levels (180%, 71%, and 40%, respectively, P < 0.05), glucose tolerance test area under the curve (AUC) insulin (112.5%, P < 0.05), and insulin resistance (44% by Matsuda Index, P < 0.05), without altering food consumption during the test period. Such lesion also induced the expression of several lipid synthesis genes in adipose and liver, as well as the adipose lipolytic gene, hormone-sensitive lipase (P < 0.05 for all). Liver monocyte chemoattractant protein 1 (a proinflammatory protein associated with metabolic syndrome) gene expression was also significantly elevated in peri-SCN area dopaminergic-lesioned rats. Peri-SCN area dopaminergic neuron-lesioned rats were also hypertensive (systolic BP rose from 157 ± 5 to 175 ± 5 mmHg, P < 0.01; diastolic BP rose from 109 ± 4 to 120 ± 3 mmHg, P < 0.05; and heart rate increased from 368 ± 12 to 406 ± 12 BPM, P < 0.05) and had elevated plasma norepinephrine levels (40% increase, P < 0.05) relative to controls.

Conclusions

These findings indicate that reduced dopaminergic neuronal activity in neurons at the area of, and communicating with, the SCN contributes significantly to increased sympathetic tone and the development of metabolic syndrome, without effect on feeding.

Introduction

Many vertebrate species in the wild exhibit annual cycles of metabolism, oscillating between seasons of obesity/insulin resistance and leanness/insulin sensitivity [1,2]. The ability to anticipate a season of low food availability by endogenous induction of the obese, insulin-resistant state supports survival through the subsequent season of scarcity. Physiological studies of seasonal animals have established important roles for interactions of circadian rhythms of neuroendocrine events in the manifestation of seasonal physiology, including metabolism.
The entire seasonal repertoire of metabolic events in representative species among teleost, avian, and mammalian vertebrate classes can be induced in animals maintained on 24-h constant light by varying the circadian time of administration of levo-3,4-dihydroxyphenylalanine (L-DOPA), the precursor to dopamine, relative to the circadian time of administration of 5-hydroxytryptophan (5HTP), the precursor to serotonin, over an approximately ten-day treatment period [3-5]. That is, L-DOPA administered at one circadian time of day relative to a statically timed administration of 5HTP induces seasonal obesity, while L-DOPA administered at another circadian time of day relative to the same statically timed administration of 5HTP induces the seasonal lean condition. Inasmuch as the suprachiasmatic nuclei (SCN) are the seat of the circadian pacemaker system of the vertebrate body, functioning via the neuroendocrine axis to synchronize temporal biology (e.g., daily metabolism) with the cyclic environment, it was postulated that such L-DOPA effects were acting at least in part by modulating SCN output function [4]. It was subsequently observed that the circadian peak input of dopamine release at the SCN differs between seasonally obese, insulin-resistant and seasonally lean, insulin-sensitive rodents. The circadian peak of dopamine release at the peri-SCN area in seasonally lean, insulin-sensitive animals (at the onset of locomotor activity) was markedly diminished in naturally occurring (seasonally) obese, insulin-resistant animals and also in obesogenic diet-induced glucose-intolerant animals [6]. However, whether such reduction in peri-SCN dopaminergic input signaling is causal in the induction and maintenance of the insulin resistance syndrome, a constellation of pathologies of insulin resistance, obesity, and hypertension, has never been evaluated.

Hypertension is a common correlate of obesity [7], and elevated sympathetic tone is a common pathophysiological condition associated with both hypertension and the hyperinsulinemic/insulin-resistant obese condition in animals and humans [8]. More importantly, such increased sympathetic nervous system (SNS) activity is a potent stimulus for development of insulin resistance syndrome, type 2 diabetes, and cardiovascular disease [9-13], currently the most prevalent diseases on earth [14]. The cause-effect relationship between elevated SNS tone and insulin resistance syndrome remains poorly understood, inasmuch as a positive feedback loop exists between these two pathophysiologies [15]. However, a common CNS neurologic contributory factor/circuitry for both the insulin resistance syndrome and elevated SNS tone may include the SCN. The SCN is a major control center for regulation of the autonomic nervous system (ANS), as it communicates with and regulates the output of preautonomic neuronal centers in the brain (e.g., the paraventricular nucleus [PVN], the ventromedial nucleus [VMH], the arcuate nucleus [ARC]), projections from which impinge on and regulate preganglionic neurons of the ANS that regulate both metabolism and vascular SNS tone [16-23]. However, what factors regulate SCN output signals to modulate both metabolism and vascular tone remains incompletely defined.
Given the association of diminished circadian-peak dopaminergic activity at the peri-SCN area with insulin resistance, and the presence of dopamine D1 and D2 receptors in the SCN and peri-SCN regions [24], we hypothesized that a chronic diminution of dopaminergic input activity to the peri-SCN/SCN clock area is actually operative in facilitating the obese, hyperinsulinemic/insulin-resistant and hypertensive state. We therefore investigated this major question by assessing the metabolic (glucose tolerance, insulin sensitivity, plasma leptin level, adipose and liver lipid metabolism gene expression, body fat level) and vascular hemodynamic (blood pressure, heart rate, and circulating norepinephrine level) impact of dopaminergic neuron lesion at the peri-SCN area (via neurotoxic lesion) in young female Sprague-Dawley rats, a model that maintains normal body fat, glucose metabolism, and vascular biology for a majority of its lifespan.

Animals

Female Sprague-Dawley rats obtained from Taconic Biosciences (Hudson, NY), where they are routinely bred and maintained on 12-h daily photoperiods for extended generations (Taconic Biosciences), were used in these studies. Such animals, at 10 weeks of age (body weight 220 ± 3 g), were maintained on long 14-h daily photoperiods (14 h light/10 h dark), typical of the summer lean, insulin-sensitive season in temperate-zone rodents [1], and allowed to feed on regular rodent chow (2018 Teklad rodent diet, Envigo, East Millstone, NJ) ad libitum. To avoid the confounder of age-induced insulin resistance upon the background metabolic status of the study animals during the long duration of the study, female rats were used, since they maintain a steady state of insulin sensitivity from an early age for a long period of their lifetime, versus male rats of this strain, which develop insulin resistance progressively from an early age [25,26]. Potential influence of estrous cycle day on study parameter outcomes was minimized by random daily investigation of study endpoints across random animals within the study groups over a 7-day time period. However, it has previously been observed that estrous cycle day does not influence the hypothalamic glucose-sensing response to elevated (post-meal) glucose levels so as to impact insulin-mediated glucose disposal [27]. Rats were habituated to our climate-controlled animal facilities for at least 7 days before initiation of any experimentation.

Experimental design

Two separate studies were conducted to assess the impact of peri-SCN area dopamine neuron lesion on peripheral glucose tolerance, insulin sensitivity, adipose and liver lipid metabolism gene profiles, obesity, and vascular biology. In Study 1, rats were randomly assigned to one of two treatment groups (N = 8/group) and infused bilaterally at the peri-SCN area with either vehicle or the dopamine neurotoxin 6-hydroxydopamine (6-OHDA) at 8 μg/side, 20 min following systemic intraperitoneal administration of protriptyline (20 mg/kg, i.p.) to protect norepinephrine and serotonin neurons. An intraperitoneal glucose tolerance test (GTT) was then performed at 16 weeks following the lesion to examine any effect on peripheral glucose metabolism and insulin sensitivity during the GTT. Body weight change from baseline was also obtained.
In Study 2, based upon the results of Study 1, animals were treated similarly to Study 1, with the exceptions that the 6-OHDA dose was lowered to 2-4 μg/side (there was no significant metabolic response difference between the 2 and 4 μg/side 6-OHDA doses, and data were combined for analysis vs vehicle control), GTT data were obtained at both 8 and 16 weeks following the lesion, and measures of humoral factors regulating metabolism, adipose and liver metabolic gene expression, body fat store levels, and vascular biology (blood pressure and heart rate) were also taken at week 16. The treatment group (N = 14) received an infusion of dopaminergic neurotoxin into the peri-SCN area bilaterally; the other group (N = 8) received a vehicle infusion. Intraperitoneal GTTs were performed 8 and 16 weeks after the neurotoxin infusion. Blood pressure and heart rate were measured after two days of recovery from the GTT at 16 weeks. Body weight and food consumption were monitored during the course of the study. Animals were sacrificed after the vascular biology assessments, and blood samples were collected for analyses of humoral metabolic factors, including plasma insulin, glucose, norepinephrine (NE), and leptin. Parametrial and retroperitoneal fat pads were removed and weighed as an index of body fat store level. Adipose and liver tissues were stored at -80 °C for analyses of lipid metabolism gene expression. A separate histological study was conducted to verify the viability of the SCN neurons several days following such peri-SCN 6-OHDA treatment. Also, a separate study was conducted to verify the specific peri-SCN dopaminergic neuronal lesion several days following such peri-SCN administration of 6-OHDA. All animal experiments were conducted in accordance with the National Institutes of Health Guide for the Care and Use of Experimental Animals (2011) and with protocols approved by the Institutional Animal Care and Use Committee of VeroScience, LLC.

Peri-SCN 6-OHDA infusion impact on SCN neurons

Four weeks after the peri-SCN dopamine neuron lesion (4 or 8 µg/side 6-OHDA plus protriptyline, 20 mg/kg i.p., to protect norepinephrine and serotonin neurons), rats were sacrificed by decapitation. Their brains were quickly collected, postfixed in buffered formalin, transferred to buffered sucrose solution, frozen, and sliced into 50-µm coronal sections through the SCN in a cryostat. Sections were mounted on gelatin-coated slides and stained with cresyl violet to assist in evaluation of the lesions post infusion of 6-OHDA.

Peri-SCN 6-OHDA infusion selective impact on peri-SCN dopaminergic neurons

Animals were randomly divided into three groups (N = 8/group). Following 6-OHDA neurotoxin injection to the peri-SCN area (at a dose of either 2 or 8 µg/SCN side), 20 min after systemic intraperitoneal administration of protriptyline (20 mg/kg, i.p.), or vehicle treatment to the peri-SCN area as described below, a 30-gauge stainless steel microdialysis guide cannula was stereotaxically implanted at the right side of the SCN with coordinates: 1.3 mm posterior to bregma, 0.4 mm lateral (right) of the midsagittal suture, and 8.4 mm ventral to the dura. The cannula was anchored firmly to the skull with stainless steel screws and secured to the surface with dental cement. Microdialysis experimentation was conducted 12 to 20 days after lesioning. During the test days, each animal was placed in an acrylic bowl with free access to food and water.
A 32-gauge dialysis probe with a 1-mm-long tip of semi-permeable membrane (20,000 molecular weight cutoff) was inserted into the guide cannula, with the probe membrane protruding 1 mm beyond the guide cannula. Using a microinjection pump (CMA/100), cerebrospinal fluid (CSF) solution was continuously perfused through the probe at a rate of 0.5 µl/min. The probe was connected to the microinjection pump by microbore Teflon tubing through a counterbalanced two-channel swivel.

Peri-SCN dopamine neuron-selective lesion with 6-OHDA

6-OHDA neurotoxin treatment to the peri-SCN area was performed in a manner designed to damage only dopaminergic neuronal input to the SCN without damaging neurons within the SCN itself (see Results, Fig. 1). Each animal was anesthetized with ketamine/xylazine (60/5 mg kg^-1 body weight, i.p.) and placed in a stereotaxic frame (David Kopf). A double stainless steel guide cannula was implanted at coordinates 1.3 mm posterior to bregma, 0.4 mm on each side lateral to the midsagittal suture, and 7.4 mm ventral to the surface of the skull (landing 2 mm above the SCN). The injection cannula (33-gauge) inserted through the guide cannula extended to a total depth of 9.4 mm. 6-OHDA was infused bilaterally into the peri-SCN area of each animal to selectively damage or destroy dopaminergic neuron terminals outside of the SCN, twenty minutes after each animal received an intraperitoneal injection of protriptyline (20 mg/kg, i.p.) to block the neurotoxic effects of 6-OHDA on noradrenergic and serotonergic neurons [28]. This is a well-established method to selectively damage dopaminergic neurons without affecting noradrenergic or serotonergic neurons [28]. Although there are dopamine D2 and D1 receptors and amino acid decarboxylase within the SCN itself, there are no tyrosine hydroxylase-positive neuronal cell bodies within the nucleus [24], thus precluding damage to SCN neurons with this procedure. In Study 1, rats were subjected to infusion of either 6-OHDA (8 μg/side, N = 8) in saline containing 0.2% ascorbic acid or vehicle (saline containing 0.2% ascorbic acid, N = 8) at the peri-SCN area bilaterally. The intra-cannula infusion was carried out over 2 min at a flow rate of 0.2 µl/min (a total injection volume of 0.4 µl for each side of the SCN). A further 60 s was allowed after the infusion for the solution at the tip of the cannula to diffuse into the peri-SCN area. In Study 2, rats were similarly handled, prepared, and treated with 6-OHDA infusion at the peri-SCN area at 2-4 μg/side (N = 14) or vehicle (N = 8).

Glucose tolerance test

Glucose tolerance tests were performed 5 h after light onset, at 16 weeks following the SCN 6-OHDA lesion in Study 1, or at 8 and 16 weeks following the peri-SCN area 6-OHDA lesion in Study 2. A 50% glucose solution was administered intraperitoneally (3 g/kg body weight), and blood samples were taken from the tail before glucose administration and 30, 60, 90, and 120 min after glucose injection. Blood samples were collected into vials with 5 µl EDTA, immediately separated by centrifugation, and stored at -80 °C until assay of insulin. The Matsuda index [29] was calculated to assess insulin sensitivity.
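For concreteness, a minimal sketch (not from the paper; the helper names and the sample values below are our own, and the conventional Matsuda units of mg/dL glucose and microU/mL insulin are assumed) of the two GTT summary statistics used throughout the Results:

```python
# Trapezoidal area under the curve over the GTT sampling times, and the
# Matsuda index of whole-body insulin sensitivity.
import math

def auc_trapezoid(times_min, values):
    """Total area under the curve by the trapezoidal rule."""
    return sum((values[i] + values[i + 1]) / 2 * (times_min[i + 1] - times_min[i])
               for i in range(len(times_min) - 1))

def matsuda_index(glucose, insulin):
    """Matsuda index: 10000 / sqrt(G0 * I0 * Gmean * Imean)."""
    g_mean = sum(glucose) / len(glucose)
    i_mean = sum(insulin) / len(insulin)
    return 10000.0 / math.sqrt(glucose[0] * insulin[0] * g_mean * i_mean)

t = [0, 30, 60, 90, 120]          # sampling times (min), as in the protocol
g = [90, 180, 160, 130, 110]      # hypothetical glucose values
i = [8, 45, 38, 22, 14]           # hypothetical insulin values
print(auc_trapezoid(t, i), matsuda_index(g, i))
```

A lower Matsuda index corresponds to lower insulin sensitivity, which is how the percentage reductions quoted below should be read.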
Assay of metabolic parameters

Blood glucose concentrations were determined at the time of blood collection by a blood glucose monitor (OneTouch Ultra, LifeScan, Inc.; Milpitas, CA, USA). Plasma insulin, leptin, and NE were assayed by enzyme immunoassay using commercially available kits utilizing anti-rat serum and rat insulin, leptin, and NE as standards (ALPCO Diagnostics, Salem, NH, USA).

Assay of adipose lipid metabolism gene expression

Total RNA was isolated from frozen parametrial adipose tissue samples utilizing Trizol Reagent (ThermoFisher). Total RNA quantity and purity were determined by UV spectroscopy, and the concentration was normalized prior to the reverse transcription reaction. Reverse transcription was performed with iScript RT Supermix for RT-qPCR (BioRad), followed by qPCR. The mRNA quantities of the studied genes were assessed with the following probes: hormone-sensitive lipase (HSL) with ThermoFisher Assay Rn00689222, phosphoenolpyruvate carboxykinase (PEPCK1) with ThermoFisher Assay Rn01529014, mitochondrial phosphoenolpyruvate carboxykinase 2 (PEPCK2) with ThermoFisher Assay Rn03648110, fatty acid synthase (FAS) with ThermoFisher Assay Rn01463550, and acetyl-CoA carboxylase 1 (ACC1) with ThermoFisher Assay Rn01456588, together with SsoAdvanced Universal Probes Supermix (BioRad).

Blood pressure and resting heart rate measurements

Systolic and diastolic blood pressure (BP) and resting heart rate (RHR) were measured non-invasively by determining the tail blood volume with a volume pressure recording sensor and an occlusion tail-cuff (CODA-6 non-invasive blood pressure system, Kent Scientific Corp., Torrington, CT) in conscious rats during the animals' normal sleeping period of the day (5 h after light onset), following the manufacturer's instructions. Several days before experimental recordings, rats were acclimated to the restraining cage and the tail cuff to minimize or reduce any stress influence on the readings. BP and heart rate values per animal were the result of an average of 6-8 measurements.

Statistical analysis

All data are expressed as mean ± SEM. Data comparing the group mean values of plasma glucose and insulin during GTTs, and neurotransmitter levels in the microdialysis experiment, were analyzed by two-way repeated-measures ANOVA or one-way ANOVA as appropriate, followed by Duncan's new multiple range tests. Differences in the group mean values of weight change from baseline, areas under the glucose or insulin curves during GTTs, fasting plasma insulin, glucose, NE, or leptin levels, blood pressures, and heart rate were each determined by unpaired t-tests. A statistical value of P < 0.05 (2-tailed) was considered statistically significant.
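An illustrative sketch of the unpaired t-test used for these group comparisons (the values are hypothetical, chosen only to show the call; scipy's ttest_ind returns a two-tailed P by default, matching the significance criterion above):

```python
# Unpaired (independent-samples) t-test for a lesioned vs control comparison.
from scipy import stats

lesioned = [5.1, 6.0, 5.7, 6.3, 5.5]  # hypothetical fat pad weights, g
control = [3.9, 4.2, 3.6, 4.4, 4.0]
t_stat, p_value = stats.ttest_ind(lesioned, control)
print(t_stat, p_value)  # P < 0.05 (2-tailed) taken as significant
```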
Results

Effect of peri-SCN area 6-OHDA infusion on SCN viability

With the placement of the infusion cannula targeted at 0.4 mm lateral of the midline, just outside the SCN, as in all 6-OHDA infusion studies conducted herein, neither the cannula nor the infusion of 6-OHDA caused damage to the SCN itself, as observed from histological examination of the SCN and surrounding area 4 weeks following the treatment (see Fig. 1). A representative coronal section from a peri-SCN dopamine neuron-lesioned (4 µg/side 6-OHDA plus protriptyline, 20 mg/kg i.p.) rat brain is shown in Fig. 1. The cresyl violet staining demonstrates normal neuronal anatomy, with no damage at the peri-SCN infusion area.

Effect of peri-SCN 6-OHDA infusion on peri-SCN dopaminergic neuronal activity

At 12-20 days after the peri-SCN area 6-OHDA infusion at either 2 or 8 μg/side, peri-SCN extracellular dopaminergic activities were significantly reduced, by 35% and 64% in animals with the low and high doses of 6-OHDA lesion, respectively (Fig. 2). Two-way ANOVA with repeated measures on DA indicates a significant group effect (F(2,21) = 3.759, P < 0.05). Post hoc Duncan tests show reduced extracellular DA in animals with the low and high doses of 6-OHDA lesion compared to controls (P < 0.05, Fig. 2a). There was also a significant group effect of 6-OHDA upon both of the extracellular DA metabolites, DOPAC and HVA (two-way ANOVA with repeated measures: F(2,21) = 5.378, P < 0.05, and F(2,21) = 5.291, P < 0.05, respectively). Both the 2 and 8 µg/SCN-side 6-OHDA-lesioned groups showed greatly reduced extracellular levels of the DA metabolites DOPAC (84% and 74%, respectively) and HVA (93% and 84%, respectively), compared to the vehicle group (P < 0.05). There was no difference in treatment effect between the two lesion doses. Such 6-OHDA infusion at either 2 or 8 µg/SCN side did not significantly affect extracellular noradrenergic or serotonergic activity.

Study 1

Body weights of the two groups of rats at the start of the experiment were 204.3 ± 6.1 g (vehicle control) and 207.7 ± 2.1 g (neurotoxin group). At 16 weeks following the peri-SCN area 6-OHDA infusion at a dose of 8 μg/side, the total area under the insulin GTT curve was increased by 95% (from 0.72 to 1.40 ng/ml/min, P < 0.05, one-tailed), without change in the GTT glucose AUC, compared with vehicle peri-SCN area-treated controls (Fig. 3a-d). Such 6-OHDA peri-SCN area treatment reduced insulin sensitivity by 52% (Matsuda Index, P < 0.05, one-tailed) relative to control animals (Fig. 3e). Body weight gain over the study period in the peri-SCN area dopamine-lesioned group was 37% greater than that among the sham controls (79% vs 58% increase, P < 0.02, Fig. 3f).

Study 2

Body weights of the two groups of rats at the start of the experiment were 219 ± 5 g (vehicle control) and 221 ± 2 g (neurotoxin group). At 8 weeks after the peri-SCN area 6-OHDA infusion at 2-4 μg/side, no treatment effect upon the GTT was demonstrable versus control (Fig. 4a, b). However, peri-SCN area DA neuron lesion significantly increased plasma insulin (P < 0.05, Fig. 4c), but not glucose (Fig. 4d), during the GTT at 16 weeks following the lesion, compared with vehicle control. The total stimulated area under the insulin GTT curve was increased by 113% (from 1.6 ± 0.3 to 3.4 ± 0.5 ng/ml/min, P < 0.05) (Fig. 4e), without change in the GTT glucose AUC compared with vehicle infusion (Fig. 4f). Insulin sensitivity was reduced by 44% (Matsuda Index, P < 0.02) in peri-SCN area dopamine neuron-lesioned animals relative to control animals (Fig. 4g). Peri-SCN area DA neurotoxin also increased fasting plasma insulin by 180% (from 0.5 ± 0.1 to 1.4 ± 0.3 ng/ml, P < 0.02, Fig. 5b) and plasma leptin by 71% (to 1.5 ± 0.2 ng/ml, P < 0.05, Fig. 5c) at 5 h after light onset in peri-SCN area DA-lesioned animals compared to control animals. Dopaminergic neurotoxin infusion to the peri-SCN area resulted in an increase in weight gain of 34.8% at 16 weeks following the neurotoxin lesion (53.1% vs 39.4% increase in lesioned vs. control animals' body weight, respectively; P < 0.005, Fig. 6a). The parametrial and retroperitoneal fat weights were also increased, by 45% (P < 0.02) and 90% (P < 0.001), respectively, following peri-SCN dopaminergic neurotoxin administration compared with vehicle infusion (Fig. 6b, c).
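The percent changes reported above follow directly from the stated group values; a quick arithmetic consistency check (values taken from the text):

```python
# Percent change between reported control and lesioned group values.
def pct(old, new):
    return round((new / old - 1) * 100, 1)

print(pct(0.72, 1.40))   # Study 1 insulin AUC: ~95%
print(pct(1.6, 3.4))     # Study 2 insulin AUC: ~113%
print(pct(0.5, 1.4))     # fasting insulin: 180%
print(pct(39.4, 53.1))   # relative weight gain: ~34.8%
```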
Daily food consumption over the 16-week study period, however, was not significantly changed by such lesion treatment (18 g/day vs 17.5 g/day in lesioned vs. control animals, respectively; P = 0.9).

Discussion

The present study is the first to demonstrate in any species that 6-OHDA lesion of dopamine input neurons to the peri-SCN area in young healthy rodents induces simultaneous increases in measures associated with elevated sympathetic tone, including elevated resting heart rate, systolic and diastolic blood pressures, and circulating norepinephrine levels, concurrent with induction of insulin resistance, glucose intolerance, and increased body fat stores, without altering food consumption. The peri-SCN area and SCN are known to contain both dopamine D1 and D2 receptors, and dopamine input neurons to the SCN area and SCN arise from several brain centers, most prominently the supramammillary nucleus; a concurrent, coincident circadian rhythm of dopamine release and dopamine receptor availability at the SCN area (each with daily peak expression at the onset of locomotor activity at light offset) gives rise to the circadian rhythm of dopaminergic activity therein [24]. Consequently, the methodological approach applied herein is appropriate for assessment of the functionality of composite dopaminergic activity at the peri-SCN area (via abolishing input dopaminergic activity across the 24-h period of the day, including the circadian rhythm and peak of dopaminergic activity therein) in regulating peripheral fuel metabolism and vascular hemodynamics, without actually damaging the SCN itself. Moreover, the selective 6-OHDA lesion approach employed at the peri-SCN area induced a chronic, sustained reduction in dopaminergic input activity therein without destruction of noradrenergic or serotonergic function at the site.

The concurrent increase in resting heart rate, blood pressure, and plasma NE levels (strongly suggestive, if not indicative, of increased vascular SNS tone or altered SNS-parasympathetic balance [11]), coupled to the development of the obese, insulin-resistant state following this peri-SCN area dopamine lesion of young healthy rats (animals generally observed to be resistant to age-associated insulin resistance [11]) maintained on a low-fat diet (18% of calories from fat), is intriguing and a major finding of the present study, particularly in light of the substantial breadth of data linking elevated sympathetic tone to insulin resistance syndrome [13,30,31]. Elevated sympathetic tone (including markers thereof, such as elevated RHR) both predicts and potentiates future obesity/insulin resistance, type 2 diabetes, and cardiovascular disease in man [13,30,31]. Such elevation of SNS tone (and also of RHR) is associated with increased adipose lipolysis, hepatic and muscle insulin resistance, hyperinsulinemia, vascular inflammation, hypertension, obesity, and metabolic syndrome [9-13, 15, 32-35]; collectively, cardiometabolic syndrome. The present findings indicate that both elevated SNS tone and many of these cardiometabolic pathologies can be manifested by reducing dopaminergic input signaling to the peri-SCN area. However, the chronological sequence of these events cannot be ascertained from this initial study. Hyperinsulinemia acting within the brain is known to activate a sustained increase of basal SNS tone, and chronically increased SNS tone is known to potentiate insulin resistance and hyperinsulinemia [2,10,12,13,15,30,31,36-40].
While it may be difficult to ascertain the chicken-and-egg sequence within this positive feedback loop between hyperinsulinemia inducing increased brain activation of SNS tone [37-40] and increased SNS tone inducing hyperinsulinemia and insulin resistance [9-13, 15, 32], the current findings suggest that each pathology may share certain common etiological factors, including altered SCN clock control of cardiometabolic health as a function of persistently low dopamine input activity to this center. Further support for such a postulate is as follows. The SCN is a complex bilateral nucleus of neurons within the hypothalamus whose circadian clock gene expressions are pivotal in the regulation of whole-body physiology in vertebrates [41-44]. Interactions of circadian neuronal activities within the SCN generate target-organ-specific SCN output signals via both the neuroendocrine and autonomic nervous systems [45,46]. SCN output functions are a primary regulator of autonomic balance, and several studies have described the SCN's regulatory role in modulating autonomic control of visceral metabolism and vascular biology via its hypothalamic (e.g., VMH, PVN) and other brain center interactions [17,30,47-49]. SCN ablation and clock gene knockdown studies have identified important roles for the SCN in the regulation of the daily rhythm of blood pressure and heart rate in mammals [20,23,50,51], as well as of hepatic insulin sensitivity, glucose tolerance, and lipid metabolism [21,22,52]. However, what input signals to the SCN may regulate its control of vascular biology (and concurrently of metabolism) remain poorly defined, and these study results indicate that dopaminergic input signaling to the peri-SCN area plays a role in this regulation [50,53].

In various studies, the decrease in brain (including SCN) dopamine activity in seasonally, genetically, or dietary-induced obese, insulin-resistant animals is coupled to increases in VMH noradrenergic and serotonergic activities, as well as to increases in PVN neuropeptide Y (NPY), corticotropin-releasing hormone (CRH), and noradrenaline levels [2]. These VMH and PVN neurochemical alterations, which associate with diminished brain and SCN dopaminergic input activity in insulin resistance syndrome, are known to increase sympathetic drive to the vasculature and heart, increasing blood pressure and heart rate, respectively, and also to the viscera, increasing fat mobilization, hepatic glucose output, and lipogenesis [18,19,21,54-56]. Furthermore, when such VMH norepinephrine/serotonin or PVN NPY/CRH/noradrenaline alterations are recapitulated in healthy animals by hypothalamic micro-infusion of these neuromodulators to these sites, these manipulations potentiate hypertension, hyperinsulinemia, hyperleptinemia, obesity, and SNS activation of adipose lipolysis [57,58], much as is observed in the present study with SCN dopamine neuron lesion. Moreover, systemic treatment with dopamine agonist in animal models of insulin resistance syndrome reverses the above-described aberrant VMH and PVN neurophysiological framework while ameliorating the syndrome [59-61]. The present study findings indicate that the reduction of peri-SCN dopaminergic activity is not merely associated with the cardiometabolic syndrome but in fact can act causally to facilitate the onset and maintenance of this pathophysiological condition.
Such a neuroendocrine shift driven by such hypothalamic alterations (low SCN dopamine input, elevated VMH NE and serotonin input, elevated PVN NPY and CRH output) can function to facilitate fattening (from hyperinsulinemia) as well as simultaneously increased adipose lipolysis (from elevated SNS drive to adipose and resistance to the anti-lipolytic effect of hyperinsulinemia), creating a vicious cycle of insulin resistance-induced hyperinsulinemia (from elevated FFA mobilization [62]) that supports adipose lipogenesis and central (e.g., hypothalamic) SNS tone activation [9,11,63,64], which in turn drives further adipose FFA mobilization, insulin resistance, and hyperinsulinemia. Hyperleptinemia (as observed in the peri-SCN area dopamine neuronal lesioned animals in the present study), suggestive of (selective) leptin resistance commonly associated with obesity [65-67], further facilitates increased SNS tone [68], contributing to the insulin resistance syndrome (including hypertension [69]) and compounding the vicious cycle of dysmetabolism [67]. (Fig. 7: Peri-SCN area dopaminergic lesion increased adipose and liver lipogenic enzyme mRNA levels. Animals received an infusion of 6-OHDA (2-4 μg/side) at the peri-SCN area bilaterally; data were collected 17 weeks post-lesion. (a) Impact of peri-SCN area dopaminergic 6-OHDA lesion (n = 14) vs. vehicle (n = 8) on mRNA expression of the following genes in adipose tissue: PEPCK1, PEPCK2, G6PDH, FAS, HSL, PDHx, ACC, ME. (b) Impact of peri-SCN area dopaminergic lesion on mRNA expression of the following genes in the liver: G6PDH, ACC, FAS, CPT1, MCP1, TNFa. Data are represented as the mean ± SEM. * P < 0.05 (2-tailed Student's t-test) or † P < 0.05 between treatment groups (1-tailed Student's t-test).) Whether leptin resistance facilitated the obese, insulin resistant condition, was a consequence of this condition, or both cannot be concluded from this study. However, previous studies of several experimental animal models of metabolic syndrome suggest a major etiological role for alteration in ventromedial hypothalamic function, including increased noradrenergic [57,59] and decreased leptin function [70] at this center, that, importantly, are associated with low brain dopaminergic activity [71,72]. The VMH leptin resistance and noradrenergic hyperactivity that each induce the metabolic syndrome in animals fed regular chow (as in the present study) occur without any change in food consumption (as in the present study) and result from major changes in energy expenditure/utilization processes [57,59,70]. Furthermore, leptin is capable of reducing body weight without altering food consumption [73]. Moreover, systemic treatment of metabolic syndrome animals with dopamine agonist reverses the syndrome and the VMH noradrenergic hyperactivity [71] and appears also to reverse the VMH leptin resistance [72], all without altering food consumption. The present study findings, along with these prior investigational results, suggest a possible sequence of pathophysiological events in the genesis of metabolic syndrome, starting with (stress-induced) diminished SCN area dopaminergic neuronal activity that in turn facilitates VMH leptin resistance (and noradrenergic hyperactivity), thus potentiating the subsequent induction of metabolic syndrome with manifest hyperleptinemia, in part via a shift in anabolic metabolism towards fat synthesis and accrual (see Figs.
5 and 6), with a possible reduction in total energy expenditure, yet without altering food consumption. Now, with the present study findings in hand, further detailed studies of the nature of the involvement of leptin resistance in the expression of the metabolic syndrome induced by SCN-dopaminergic neuronal lesion are warranted. Collectively, these previous and current study observations suggest that loss of the daily dopamine signal to the SCN may act to induce the above-described VMH and PVN neurochemical alterations that have been shown to associate with low SCN dopamine input activity and that generate the hypertensive, obese, insulin resistant condition without any requirement of increased caloric intake. While such a vicious cycle as described above appears to have evolved to support survival among animals in the wild against an ensuing long period (season) of low food availability [1,2], in westernized man such a cycle maintained across seasons of the year over extended time can potentiate cardiometabolic pathology [71]. The peri-SCN area dopamine neuronal lesion induction of the obese, insulin resistant state without alteration in food consumption could be the result of reduced energy expenditure and/or of channeling of anabolic processes towards de novo lipogenesis. Available evidence from seasonal animals indicates that this circadian clock system for the regulation of seasonal metabolism functions in large part to shift anabolic metabolism either towards lipogenesis or towards protein turnover during the fattening or lean seasons of the year, respectively, without change in caloric intake [1]. Also, seasonal fattening is often unaccompanied by decreased energy expenditure (as in overwintering or migratory vertebrate species) [1]. Likewise, in the present study, the observed increase in body fat store levels without change in food intake following dopaminergic neuronal lesion at the SCN area over the study period argues for either (1) a shift in metabolic energy utilization towards fat accumulation and away from protein turnover, a phenomenon consistent with (a) the natural seasonal fattening event among vertebrates in the wild and also with (b) an observed reduction in body fat and increased protein turnover, without decrease in food consumption, with systemic dopamine agonist treatment of seasonally obese hamsters [74], or (2) an actual decrease in energy expenditure coupled to increased lipogenic activity, a phenomenon consistent with the observed reduction of body fat and lipogenic enzyme activities and increased energy expenditure with systemic dopamine agonist treatment of leptin-deficient obese (ob/ob) mice [60]. We therefore examined the influence of the peri-SCN area dopamine input lesion upon lipogenic pathways in adipose tissue and liver, the primary sites for fat synthesis in the rat [75]. Interestingly, such SCN dopamine neuron lesion resulted in a coordinated increase in mRNA levels of several key adipose enzymes, each critically involved in fat synthesis. G6PDH (which provides NADPH for the reduction reactions of fatty acid synthesis), the pyruvate dehydrogenase complex enzyme PDHx (which provides the substrate acetyl CoA for fatty acid synthesis), FAS (which enzymatically elongates malonyl CoA subunits to produce the fatty acids for triglyceride synthesis), and PEPCK (which provides the glycerol backbone [via glyceroneogenesis] for triglyceride synthesis [76]) were all markedly increased in adipose tissue from the peri-SCN area dopamine input neuron lesioned vs. control animals.
ACC, the rate-limiting enzyme for fatty acid synthesis, was also markedly elevated by such intervention; however, the difference did not reach statistical significance. Such a coordinated increase in gene expression among several key enzymatic pathways that cooperate functionally to increase lipogenesis suggests that the dopamine input signal to the peri-SCN area is one that is critically involved in the SCN output regulation of whole-body lipid metabolism and fat store level. Yet, HSL, the key enzyme regulating the release of FFA from adipose, was also markedly increased in the SCN dopamine input neuron lesioned group vs. controls. Such findings suggest that the dopamine input to the peri-SCN area regulates (likely via the neuroendocrine axis) adipose metabolism in a manner that maintains (or enhances) responsiveness to the lipogenic effects of hyperinsulinemia (which was also induced by this treatment), yet that allows for resistance to the anti-lipolytic effects of such hyperinsulinemia (manifested in reduced hyperinsulinemia-induced inhibition of HSL expression), likely potentiated by increased sympathetic drive to the adipose [15,31]. Similar to adipose, mRNA levels of lipogenic enzymes (G6PDH and ACC) in liver were also significantly increased (FAS not significantly). Interestingly, liver MCP1 mRNA (a marker of pro-inflammatory macrophages, but also produced by hepatocytes [76]) was also significantly increased, which can contribute to inflammation and insulin resistance [76-79], and hepatic mRNA for TNF, an inflammatory cytokine produced by M1 macrophages and hepatocytes [76] that can potentiate insulin resistance [80-82], was also numerically (but not significantly) elevated. In obesity, adipose insulin resistance respecting insulin's diminished ability to inhibit lipolysis (in conjunction with increased sympathetic activation of adipose) facilitates an increase in plasma FFA level that in turn is a contributing factor to the genesis of insulin resistance respecting insulin action on glucose balance in liver and muscle [9-13, 15, 32, 62]. In accordance with such a mechanistic pathophysiology, lesion of dopamine input neurons to the SCN area that produced such an adipose biochemical profile also induced the insulin resistant, glucose intolerant state. The present study, however, does not allow for a time-course analysis of the inter-relations between these concurrently observed pathophysiologies. Lastly, it should be noted that this study assessed neither any possible alteration of the circadian pattern of feeding, with greater feeding occurring during the sleep cycle of the day yet without altered total daily food consumption (a perturbation that has repeatedly been demonstrated to induce obesity and hypertension in rodents and humans [83]), nor any reduction in physical activity induced by reduced dopaminergic function (as observed with reduced nucleus accumbens dopaminergic activity) [84]; with the current findings now manifest, both are worthy of future investigation. Interestingly, the environmental factors common to the western lifestyle of high fat diet, altered sleep/wake architecture, and social stress are well known to reduce brain (mesolimbic) dopamine activity and alter clock function, and also to predispose to insulin resistance syndrome and CVD risk [reviewed in 71].
The present findings suggest that reduced dopaminergic input activity to the SCN can be a contributory etiological factor in the genesis of the cardiometabolic syndrome and that such a perturbation to this endogenous control system for cardiometabolic health does not require a hypercaloric or "westernized" high fat diet for its induction. Taken in composite, these data suggest that the diminution of the circadian peak amplitude of the dopamine input signal to the SCN may be, at least in part, the molecular translation of the adverse impact of diet, stress, and altered sleep-wake cycling on cardiometabolic health. In agreement with the present findings, and with the association of diminished circadian peak dopamine activity at the SCN in animal models of insulin resistance syndrome, are clinical studies (1) using Positron Emission Tomography stable isotope ligand binding to identify a reduction in striatal dopamine binding activity among obese, insulin resistant humans, (2) identifying polymorphisms of the dopamine D2 receptor that render it less functional as associated with obesity and type 2 diabetes among humans, and (3) associating dopamine D2 antagonist use in humans (though such agents are not selective for the dopamine D2 receptor) with the obese, glucose intolerant condition [71]. Additionally, pharmacological reduction of brain dopamine activity among young healthy humans by administration of a false dopamine precursor (alpha-methyl-para-tyrosine) for only a day or two is sufficient to induce insulin resistance [85,86]. Although these clinical studies could not specifically investigate the target site of the hypothalamus, or any subregion thereof, in dopaminergic regulation of metabolism, the present study findings suggest that the hypothalamic SCN area may be a particularly important region within the brain for such dopaminergic control of metabolism. Finally, the present study results are consistent with the demonstrated therapeutic utility of circadian-timed administration to type 2 diabetes subjects of a quick-release formulation of micronized bromocriptine (a dopamine D2 receptor agonist) (Cycloset®), timed to the portion of the day when brain dopamine activity naturally peaks in healthy individuals, to improve insulin resistance, glucose intolerance, hyperglycemia, and CVD event rate in type 2 diabetes subjects [71]. One final note on the relation of the present study findings to the observations that type 2 diabetes worsens Parkinson's Disease (PD) progression (progressive loss of dopaminergic neurons within the basal ganglia-substantia nigra) [87,88] is worthy of mention for clarification purposes. Molecular neurobiology studies indicate that each of the hyperglycemia, oxidative stress, and insulin resistance of type 2 diabetes contributes to dopaminergic neuron dysfunction that can exacerbate PD by blocking neuronal growth and synapse generation and by potentiating inflammation, mitochondrial dysfunction, apoptosis, and amyloid aggregation [87]. However, studies of other brain dopaminergic neurons, such as those of the mesolimbic system and hypothalamus, indicate that natural stress (high fat diet, sleep/wake architecture alteration, or psychosocial stress) or experimentally induced diminution of mesolimbic [87] or hypothalamic (present study) dopaminergic activity potentiates insulin resistance and its sequelae.
Consequently, environmental influences such as certain behaviors and diet that reduce brain dopaminergic activities regulating metabolism can feed forward to potentiate the oxidative stress, inflammation, insulin resistance, and hyperglycemia of type 2 diabetes, which in turn feed back to potentially damage central dopaminergic neurons, including those of the basal ganglia-substantia nigra, thereby contributing to PD progression in susceptible patients. The nature of this potential cyclic interaction, connecting disrupted dopaminergic-clock potentiation of peripheral dysmetabolism and subsequent central dopaminergic neuronal damage/dysfunction, warrants further investigation. The limitations of the current study include: (a) the lack of more-detailed chronological data on metabolic and autonomic endpoints over the 16-week treatment period, from which an assessment of the time course between insulin resistance, leptin resistance, and increased sympathetic tone could have been made; (b) the lack of assessment of metabolic rate, behavior, and physical activity level during the study; and (c) the lack of assessment of protein vs. lipid anabolic processes before and following such treatment intervention; though, now having the benefit of the current study findings, such more-detailed studies are warranted. Conclusions In conclusion, selective lesion of dopaminergic input neurons to the peri-SCN area of young healthy rats induces a cardiometabolic pathophysiology characterized by increased heart rate, systolic and diastolic blood pressures, and increased plasma norepinephrine levels, coupled with obesity, hyperleptinemia, hyperinsulinemia, insulin resistance, and glucose intolerance. This cardiometabolic syndrome occurred without any increase in caloric intake of a low fat diet and was associated with a marked up-regulation of lipogenic enzymes in liver and adipose. These findings suggest that dopaminergic communication with the SCN, which is diminished in insulin resistant states, can be causal in the induction of such states, and that this induction encompasses hyperinsulinemia, hyperleptinemia, and an over-activation of the sympathetic nervous system, a composite perturbation well known to be associated with and to contribute to cardiometabolic disease [9,11,23,29-32,89]. Circadian-timed pharmacotherapies for insulin resistance syndrome subjects (e.g., prediabetes, type 2 diabetes) that help re-establish the normal circadian-peak activity of dopamine function at the SCN and improve cardiometabolic disease (i.e., bromocriptine-QR) [71] may do so in part by such activity at the peri-SCN area.
2021-01-23T15:01:38.632Z
2021-01-23T00:00:00.000
{ "year": 2021, "sha1": "2f4c696c78d44afb231c0b06007f69f39edb7743", "oa_license": "CCBY", "oa_url": "https://dmsjournal.biomedcentral.com/track/pdf/10.1186/s13098-021-00630-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2f4c696c78d44afb231c0b06007f69f39edb7743", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
260416641
pes2o/s2orc
v3-fos-license
A Model Selection Approach for Time Series Forecasting: Incorporating Google Trends Data in Australian Macro Indicators This study examined whether the behaviour of Internet search users, obtained from Google Trends, contributes to the forecasting of two Australian macroeconomic indicators: the monthly unemployment rate and the monthly number of short-term visitors. We assessed the performance of traditional time series linear regression (SARIMA) against a widely used machine learning technique (support vector regression) and a deep learning technique (convolutional neural network) in forecasting both indicators across different data settings. Our study focused on the out-of-sample performance of the SARIMA, SVR, and CNN models in forecasting the two Australian indicators. We adopted a multi-step approach to compare the performance of the models built over different forecasting horizons and assessed the impact of incorporating Google Trends data in the modelling process. Our approach supports a data-driven framework, which reduces the number of features prior to selecting the best-performing model. The experiments showed that incorporating Internet search data in the forecasting models improved the forecasting accuracy and that the results were dependent on the forecasting horizon, as well as on the technique. To the best of our knowledge, this study is the first to assess the usefulness of Google search data in the context of these two economic variables. An extensive comparison of the performance of traditional and machine learning techniques on different data settings was conducted to enable the selection of an efficient model, including the forecasting technique, horizon, and modelling features. Introduction Forecasting the trends of economic indicators is crucial for policy makers and investors in making informed decisions. However, the official release of such indicators suffers from an information time lag because of the time and effort needed to collect the required data. To address this issue, researchers have aimed to nowcast and forecast the economic indicators. The unemployment rate is one of the key indicators, due to its direct connection to the economic cycle and its influence on decision-makers. Several researchers have attempted to improve unemployment rate forecasting for various developed and developing countries. While some authors have applied different machine learning techniques to forecast unemployment [1,2], others have focused on incorporating additional data, in particular online search data, to improve the forecasting accuracy. Ettredge et al. [3] were the first to address such issues and investigated the link between online job searches and the official rates of unemployment in the United States. Additionally, Choi and Varian [4,5] put forward this line of research by describing and illustrating how Internet search data could be used to improve the predictions of several economic indicators such as unemployment claims, retail sales, property demand, and holiday destination popularity. These two papers have stimulated much recent research in this field. Since no prior research has investigated the relation between online search data and the unemployment rate in Australia, we chose to assess whether the search behaviour of Australian Internet users can improve the performance of Australian monthly unemployment-rate-forecasting models.
In addition to the unemployment rate, we selected another indicator for our experiments: the number of short-term travellers visiting Australia. As a destination for millions of tourists, Australia has a tourism industry that is directly linked to its economic wellbeing. Forecasting the number of incoming travellers will assist investors in making their investment decisions and government agencies in properly allocating their resources to accommodate the number of travellers. Researchers have used online search data for different applications within the tourism industry. While some have focused on forecasting the hotel demand for particular cities or countries [6-8], others, such as Feng et al. [9] and Gawlik et al. [10], have assessed the effectiveness of search data in forecasting the number of tourists rather than hotel demand. The selection of the two indicators analysed in this study, which are released monthly by the Australian Bureau of Statistics, was based on their ability to reflect the behaviour of Google users across different geographical locations. While Google Trends data collected within Australia were used to forecast the monthly unemployment rate, we employed globally searched keywords via the Google engine to forecast the number of travellers visiting Australia. This approach enabled us to assess the applicability of Google Trends data in two distinct settings and to evaluate the forecasting horizon associated with the behaviours of both local and international users. Furthermore, we present a novel forecasting framework that selects the optimally performing model from two families of techniques suitable for forecasting time series data, namely traditional linear techniques (SARIMA and SARIMAX) and machine learning techniques (SVR). The framework also incorporates feature-selection techniques, which play a crucial role in the forecasting process; prior literature has not explored this aspect to the extent presented in this paper. In our paper, we examined the predictive power of Google Trends data using support vector regression (SVR) and convolutional neural networks (CNNs) against traditional linear regression techniques such as SARIMA and SARIMAX in forecasting the two selected time series indicators. The paper is organised as follows. Section 2 presents an overview of the literature on using Google Trends data for economic indicators. Section 3 presents our contribution of applying a data-driven approach to forecast both indicators, through a description of the methodology covering the data collection, the feature engineering and selection techniques, and the forecasting models used in our paper, alongside the evaluation metrics. Section 4 describes the experimental setup, in particular a description of each set of experiments and the datasets associated with them. Section 5 evaluates empirically the forecasting performance of our models and provides an indication as to why a data-driven approach to forecasting is necessary. Section 6 contains a discussion on the suitability of using alternative data and non-traditional techniques for forecasting. Literature Over the last few years, several attempts have been made to explore the potential benefits of using Internet search data in forecasting economic variables [4]. In this section, we present some of the recent studies that have incorporated Internet search data in unemployment and tourism demand forecasting.
To the best of our knowledge, our paper is the first to assess the usefulness of Google search data in the context of these economic variables in Australia and the first to compare the performance of traditional and machine learning techniques on different data settings. Unemployment Forecasting Forecasting unemployment has become an area of interest for researchers. There are two areas of focus for improving its accuracy: incorporating additional data sources (mainly Internet search data) and using non-traditional techniques. Studies in this area did not establish whether Internet data can replace or complement traditional methods; some authors obtained better results when combining both kinds of data in their models [16]. Most researchers have suggested using multiple keywords to improve the prediction accuracy of the forecasting models. In this paper, we compared the models using combined data, as well as search data on their own, to address this limitation. Another set of research has focused on using alternative techniques to forecast unemployment. Researchers have compared several machine learning techniques, such as artificial neural networks (ANNs) [2,30,31] and SVR [2], as well as hybrid approaches [1], and found that their experiments yielded better results than ARIMA models. Considering that researchers have not tested the impact of search data when forecasting unemployment using both traditional and machine learning techniques, we were interested in assessing the efficacy of Google Trends in Australia, where Google search is widely used. For this forecasting purpose, we employed SARIMA, SVR, and CNNs on an expanded list of search keywords related to Australia. Tourism Forecasting The real-time characteristics of Internet search data have motivated researchers to examine their predictive power in the tourism and hospitality industry. The scope of past research has varied from forecasting hotel demand to the number of visitors to cities and countries. Several research works have successfully employed Internet data to forecast the demand for hotel rooms and flights over different forecasting horizons [6-8,32]. Similar to unemployment forecasting, tourism research has been extended to predict the volume of visitors to cities [33,34] and countries [5,9,10,25,26]. Their results showed higher accuracy when incorporating search data. A limited number of those research works focused on the volume of incoming visitors regardless of their point of departure, and they did not evaluate the performance of other techniques. While fewer studies have forecast the number of visitors on a macro level, there has not been an assessment of the benefit of using search data together with historical visitors' data using machine learning techniques. In our paper, we used the same approach applied to the unemployment data to evaluate the SARIMA and SVR results in forecasting the number of short-term visitors coming to Australia. A similar comparison was performed recently by Botta et al. [35], but instead of using SVR, they deployed an ANN to predict the number of visitors to a local museum. We also applied the search keywords used by Feng et al. [9] and tailored them to the Australian context, since they were proven successful in forecasting the number of visitors and they covered different aspects of tourism (food, airline, shopping). Data Collection The initial stage of our research involved data collection. We utilised two main sources of data: Australian economic indicators and Google Trends data.
For the economic indicators, we extracted the historical data of two key indicators for the Australian economy from the Australian Bureau of Statistics' website: the monthly unemployment rate and the monthly number of short-term visitors arriving in the country. These indicators represent essential aspects of the economy, and forecasting their future values offers significant insights for policymakers and economic stakeholders. These figures are often calculated by conducting surveys and collecting data from different agencies, leading to a delay in publishing the most-recent numbers. Monthly unemployment rate data are available from February 1978, while the number-of-visitors data cover the period starting in January 1991. Australia has a stable economy: the unemployment rate has not surpassed the 10% mark since 1994 and has since fluctuated between 4% and 6%. As seen in Figure 1, there were two spikes in unemployment in the last two decades: one after the GFC and one during COVID-19. Australian unemployment data are seasonal in nature, with the same trend repeated each year. For example, there has always been an increase in the unemployment rate after December, and this is expected to continue in the future. Since we used the SARIMA model, the parameter m that indicates the cycle of the trend was set to 12. Figure 2 shows the number of short-term visitors coming to Australia.
Australia is becoming a more-popular destination over time. The seasonality in the data is visible through the repeated trends: a closer inspection of Figure 2 shows that the same trends are repeated in the same month every year, e.g., an increased number of visitors around Christmas time and during summer. The large drop in the number of visitors on the right-hand side of the chart is due to COVID, when Australia had travel restrictions in place. In parallel, we collected Google Trends data related to the aforementioned economic indicators. Google Trends data, which have been offered by Google since 2014, provide the search frequency of keywords, i.e., the ratio of the search volume of a certain keyword to the total search volume of all keywords in a certain period of time; this search frequency is further normalised into the interval [0, 100], which avoids changes in keyword search volumes being driven merely by an increase in the number of users. It represents a rich source of insights about public interest in various topics over time. By selecting search terms related to the economic indicators, we could gauge public interest in these topics and examine the potential predictive power this interest holds for future economic conditions. In this paper, we searched for keywords related to each of the two target indicators. For unemployment, the process of selecting search keywords began by considering what Internet users would search for if they became, or were about to become, unemployed [4]. It seems sensible to suggest that such searches are likely to be focused on two areas: the benefits available to the unemployed, and particular websites and keywords that unemployed people may use (e.g., job advertisement websites, "job and education" topic searches). Table 1 shows the data extracted from the Google Trends service. Centerlink is an Australian government service that offers several benefits, including unemployment benefits. Additionally, we incorporated an indicator for "job" to accommodate searches related to job searches that are general in nature and difficult to capture using more-specific terms. Seek and Indeed are popular job advertisement websites, mainly used to look for job vacancies, so they are also included. Additionally, we added the trend data of searches for the word "unemployment". All the data extracted using Google Trends were restricted to searches within Australia. Figure 3 shows the popularity of four of the Google indicators extracted to forecast the unemployment rate.
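The keyword series in Table 1 can be retrieved programmatically. The sketch below uses pytrends, an unofficial Python client for Google Trends; this library choice, the keyword spellings, and the parameter values are our assumptions, since the paper does not state how the data were downloaded.

# A minimal sketch of pulling Australia-restricted monthly interest for the
# unemployment-related keywords, assuming the unofficial pytrends library
# (pip install pytrends); illustrative only, not the paper's actual pipeline.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-AU", tz=600)
keywords = ["centerlink", "job", "seek", "indeed", "unemployment"]  # assumed spellings
pytrends.build_payload(kw_list=keywords, geo="AU", timeframe="all")
trends = pytrends.interest_over_time()       # interest normalised to [0, 100]
trends = trends.drop(columns=["isPartial"])  # drop the partial-period flag
print(trends.tail())

For the visitors indicator, the same call with geo="" would return the worldwide series described in the following paragraphs.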
There were some limitations associated with selecting search keywords relevant to unemployment. Centerlink offers several services other than unemployment benefits; therefore, a change in its trend does not necessarily reflect changes in the demand for those benefits. Additionally, there are certain job vacancies, relevant to industries such as construction, that might not be posted on the "Seek" website; an increase in unemployment in the construction industry might, therefore, not lead to an increase in accesses to this popular job search website. Furthermore, there are other popular platforms, such as LinkedIn, that can be accessed via a mobile application or directly through the website to look for job vacancies.
Given these limitations of Google Trends data, we intended to use the extracted time series data as a proxy for changes in the labour market, rather than as an accurate reflection of changes in the Australian unemployment rate. The Google indicators selected for forecasting the number of short-term visitors are shown in Table 2, alongside the reference names used in our code. These indicators are similar to those used by Feng et al. [9], and they cover different areas of what travellers might need to get to their destination and to facilitate their visit. Terms such as "Australian weather" and "Australian climate" indicate the interest of search engine users in knowing what to wear when visiting Australia. The terms "Australia airline", "Qantas", and "Australian map" indicate the interest in knowing more about how to get to and navigate Australia; Qantas is the flagship carrier of Australia and its largest airline by fleet size, international flights, and international destinations. The extracted search data cover worldwide searches, in contrast to the ones used to forecast unemployment, which were restricted to Australia. One of the limitations of using keywords looked up all over the world is that they include the searches of users within Australia, and searches for those keywords by Australian residents do not contribute to the number of tourists visiting Australia. For this exercise, we assumed that the searches for these terms within Australia did not create any noise, as there were no noticeable changes in the search trends. Additionally, the search results were limited to the Google engine and did not include the usage of people residing in China, due to the restriction on using Google in China; Chinese nationals constitute a large proportion of the tourists visiting Australia. Feature Engineering After the data collection, we proceeded to the feature-engineering phase. The goal was to transform the collected data into a format that could be utilised more effectively by our predictive models. This involved creating new variables, based on our raw data, that better represent the underlying trend patterns for the predictive models. This process was applied in our study to increase the predictive performance of our models. For our dataset from the Australian Bureau of Statistics (ABS) and Google Trends, the original data were augmented by creating time-based features, designed to capture the dynamic behaviour and trends in the data over time. These included lagged values of the indicators themselves and derived statistics such as moving averages. We created lag features for each dataset, specifically for the 12 previous months. The assumption here was that the current month's value of a given economic indicator (such as the unemployment rate or visitor arrivals) or Google Trends value would have some correlation with its past values. For instance, if the unemployment rate was high last month, it is likely to be high in the current month as well, barring any substantial changes in the economic environment. Lagged features were derived by shifting the time series data by one period (month) to create a new feature (Lag-1), by two periods to create another feature (Lag-2), and so on, up to twelve periods (Lag-12). This was carried out because it is plausible that both the dependent variables and the Google Trends indicators could have monthly seasonality lasting up to a year, and we wanted our models to capture this potential seasonal effect.
We also created moving average features, which represent the mean of the data points over a specified period. These were calculated for the last 3, 4, ..., and 12 months. The rationale for creating these features is that, while individual data points (such as a spike in search interest or a dip in unemployment) can be quite volatile, the average value over a certain period provides a smoother representation of the underlying trend in the data:

X_average(n) = (X_lag(1) + X_lag(2) + ... + X_lag(n)) / n

In addition to the lag and moving average features, a "month" feature was created to capture any potential seasonal effects. This feature represents the month of the year (a number between 1 and 12) at each data point. This is particularly important for data such as tourism, which can show substantial variation depending on the time of year. Table 3 shows a list of all the features created. The high-volatility components associated with time series data are often very difficult to model successfully; hence, a scaling and/or transformation process is usually performed on the series prior to implementing the actual experiments [36]. Since we wished to be able to correctly predict the direction of movement of the number of short-term visitors, we applied a data transformation to the series, which results in better performance [37]. Natural logarithm transformations were applied to the data series prior to running the SARIMA(X) and SVR algorithms. To achieve a logarithmic transformation of our short-term visitors' data, the following equation was applied:

y_t = ln(p_t)

where y_t is the transformed number of visitors and p_t is the original value.
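These transformations map directly onto a few pandas operations. The sketch below is illustrative: the DataFrame layout, the "target" column name, and the monthly DatetimeIndex are our assumptions, not taken from the paper's repository.

# Minimal pandas sketch of the lag, moving-average, month, and log-transform
# features described above; column names and index type are hypothetical.
import numpy as np
import pandas as pd

def build_features(df: pd.DataFrame, col: str = "target") -> pd.DataFrame:
    out = df.copy()
    # Lag-1 ... Lag-12: the series shifted by 1 to 12 months.
    for lag in range(1, 13):
        out[f"{col}_lag_{lag}"] = out[col].shift(lag)
    # 3- to 12-month moving averages over past values only (means of the lags).
    for window in range(3, 13):
        out[f"{col}_ma_{window}"] = out[col].shift(1).rolling(window).mean()
    # Month of year (1-12) for seasonality; assumes a monthly DatetimeIndex.
    out["month"] = out.index.month
    # Natural log transform, used for the short-term visitors series.
    out[f"{col}_log"] = np.log(out[col])
    return out.dropna()

The same function can be applied to each Google Trends column to generate its lagged and averaged variants.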
Feature Selection In recent years, many feature-selection methods have been proposed. These methods can be categorised into three groups [38]: filter, wrapper, and embedded methods. Filter methods calculate the score of each feature and rank them accordingly, without any dependency on the model. They are simple to implement, easy to interpret, and work effectively with high-dimensional data. Filter methods are fast strategies that provide good results in classification tasks [39-41]; an extensive overview of existing filter methods was presented by Lazar et al. [42]. After engineering a wide range of features from the target variables and Google Trends indicators, we applied feature-selection methods that combined recursive feature elimination (RFE) with mutual information (MI) and the f_test. These methods provided us with a robust and diverse perspective on feature importance. For the exogenous variables derived from Google Trends, we used the Pearson correlation to determine the most-relevant variables, which were then used to train the SARIMAX model. The wrapper method, RFE, uses a machine learning algorithm (in our case, a DecisionTreeRegressor) to rank features by importance and recursively eliminates the least-important features. This method can capture interactions between features, since it uses a machine learning model for the ranking. The filter methods, the f_test and mutual information, rank features based on their individual predictive power. The f_test checks the correlation between each feature and the target variable, while mutual information measures the dependency between the feature and the target; a higher mutual information means a higher dependency. By using these methods together, we obtained the benefits of both: the power of a machine learning model to capture complex relationships and the speed and simplicity of univariate statistics. The filter feature-selection approach used for the SVR and CNN models is shown in Figure 4 and described in the snippet below. 1. Create a training dataset. 2. Perform RFE using a decision tree as the estimator. 3. Select the top 50% of the features from RFE. 4. Compute the mutual information value (MIV) and the f_test for the remaining features. 5. Filter the features based on the f_test and the MIV: select the top 10% of features based on the f_test and the top 25% based on the MIV. The approach of using the Pearson correlation as a feature-selection method for our SARIMAX model is a straightforward, yet effective, one, given that SARIMAX is not capable of modelling non-linear relationships. The Pearson correlation coefficient measures the linear relationship between two datasets and ranges from −1 to 1: a correlation of −1 indicates a perfect negative linear relationship, a correlation of 1 indicates a perfect positive linear relationship, and a correlation of 0 indicates no linear relationship. In our experiment, we selected only those exogenous variables that had a correlation value greater than 0.4 in absolute terms, i.e., considered to have a moderate to strong linear relationship with the dependent variable. This helps reduce the dimensionality of our data and can improve the interpretability and performance of our models. In summary, we chose a combination of feature-selection and reduction techniques in our experiments to highlight the importance of incorporating such techniques in the modelling process to improve the accuracy of the models. The comparison of different feature-selection techniques is out of scope for this paper; however, the selected techniques can detect different kinds of relationships between the created features and the target variable.
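A minimal scikit-learn sketch of this three-stage pipeline is shown below. The percentile cut-offs (50%, 10%, 25%) follow the snippet above, while taking the union of the f_test and MI survivors, the random seeds, and the array-based interface are our assumptions.

# Sketch of the RFE + f_test + mutual-information selection described above,
# assuming scikit-learn; whether the paper unions or intersects the two
# filter stages is not stated, so the union here is an assumption.
import numpy as np
from sklearn.feature_selection import RFE, f_regression, mutual_info_regression
from sklearn.tree import DecisionTreeRegressor

def select_features(X, y, feature_names):
    # Stage 1: RFE with a decision tree keeps the top 50% of features.
    rfe = RFE(DecisionTreeRegressor(random_state=0), n_features_to_select=0.5)
    keep = rfe.fit(X, y).support_
    X_rfe = X[:, keep]
    names_rfe = np.asarray(feature_names)[keep]
    # Stage 2: score the survivors with the f_test and mutual information.
    f_scores, _ = f_regression(X_rfe, y)
    mi_scores = mutual_info_regression(X_rfe, y, random_state=0)
    # Stage 3: keep the top 10% by f_test and the top 25% by MI.
    top_f = f_scores >= np.percentile(f_scores, 90)
    top_mi = mi_scores >= np.percentile(mi_scores, 75)
    return names_rfe[top_f | top_mi]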
Forecasting Techniques The seasonal auto-regressive integrated moving average (SARIMA) model is an extension of the ARIMA model. ARIMA models are a subset of linear regression models that attempt to use past observations of the target variable to forecast its future values. The "S" in SARIMA stands for seasonal: it adjusts the model to deal with repeated trends. Seasonal data can be easily identified by looking at repetitive spikes over the same period of time; those spikes are consistently cyclical and easily predictable, which suggests that we should look past the cyclicality and adjust for it. Since SARIMA can only use the past values of the target variable Y, SARIMAX is used to incorporate exogenous variables X. When using SARIMAX, the input data include parallel time series variables that are used as a weighted input to the model. To find the optimal SARIMA and SARIMAX models, a grid search over the parameter values was performed; the best model found has the lowest Akaike information criterion (AIC) and Bayesian information criterion (BIC). SARIMA and SARIMAX were used as the baseline models for the time series forecasting of the two Australian indicators of interest: the monthly unemployment rate and the monthly number of short-term visitors. Since the SARIMAX model can only detect linearity between the target variable and the past values of the input data, we employed SVR and CNNs to check whether there is non-linearity between the input features and the target variable and, therefore, whether the forecasting performance can be improved over that of SARIMAX. SVR, introduced by Drucker et al. [43], is a category of the support vector machines (SVMs) originally introduced by Vapnik [44]. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that are close (within a threshold ε) to the model prediction. A detailed analysis and description of SVR can be found in Basak et al. [45], Sapankevych and Sankar [46], and Smola and Schölkopf [47], and an application to the prediction of unemployment in Stasinakis et al. [48]. SVR has been used widely for time series prediction [46], and its application areas are many, including financial forecasting [49], among others. Convolutional neural networks (CNNs) were introduced by Yann LeCun, Yoshua Bengio, and others in the 1990s [50]. Initially, CNNs were primarily developed and used for computer vision tasks such as image classification. However, CNNs have also been adapted and applied to other domains, including time series analysis and regression tasks. While CNNs were originally designed for image-based data, their ability to learn hierarchical patterns and capture local dependencies in data makes them suitable for analysing time series data as well. In time series analysis, CNNs can be used as regression techniques by applying them to the input data and predicting the target variable. By leveraging convolutional layers and pooling operations, CNNs can automatically learn and extract relevant features from the time series data, making them powerful tools for time series forecasting and regression tasks. To carry out non-linear regression using SVR and CNNs, it is necessary to create a higher-dimensional feature space from the time series data, as discussed in Section 3.2.
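The AIC-based grid search can be sketched with statsmodels as below; the search ranges are illustrative assumptions (the paper does not report the exact grid), with m = 12 for monthly seasonality, and exogenous variables can be passed via the exog argument to obtain SARIMAX.

# Illustrative AIC-driven grid search over SARIMA orders using statsmodels;
# the (p, d, q)(P, D, Q) ranges here are assumptions, not the paper's grid.
import itertools
import statsmodels.api as sm

def grid_search_sarima(y, exog=None, m=12, max_order=2):
    best_aic, best_fit = float("inf"), None
    for p, d, q in itertools.product(range(max_order + 1), repeat=3):
        for P, D, Q in itertools.product(range(2), repeat=3):
            try:
                fit = sm.tsa.statespace.SARIMAX(
                    y, exog=exog, order=(p, d, q),
                    seasonal_order=(P, D, Q, m),  # m = 12 for monthly data
                ).fit(disp=False)
            except Exception:
                continue  # skip non-convergent or invalid specifications
            if fit.aic < best_aic:
                best_aic, best_fit = fit.aic, fit
    return best_fit

In practice, pmdarima's auto_arima performs a comparable stepwise search and is a common alternative.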
Model Evaluation In order to evaluate the performance of the SARIMA, SVR, and CNN models on out-of-time sample data, we used two different metrics: the mean-squared error (MSE) and the symmetric mean absolute percentage error (SMAPE) [51]. These two metrics proxy the accuracy of a model, since they distinctly measure the difference between the actual and predicted values. The objective of our experiments was to improve the accuracy of the models; therefore, these metrics seemed appropriate for evaluating the results. The MSE is a metric corresponding to the expected value of the squared error or loss. If ŷ_i is the predicted value of the i-th sample and y_i is the corresponding true value, then the MSE estimated over n (the number of samples) is defined as:

MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²

The SMAPE is an accuracy measure based on percentage (or relative) errors, defined as follows:

SMAPE = (1/n) Σ_{t=1}^{n} |F_t − A_t| / ((|A_t| + |F_t|) / 2)

where A_t is the actual value and F_t is the forecast value. The absolute difference between A_t and F_t is divided by half the sum of the absolute values of the actual value A_t and the forecast value F_t; this quantity is summed over every fit point t and divided again by the number of fit points n. A perfect SMAPE score is 0.0, and a higher score indicates a higher error rate. Further statistical significance testing was applied to evaluate the performance of the different techniques and to determine whether there were significant differences among them. One approach is to apply the analysis of variance (ANOVA) to the predicted values generated by the multiple models (SARIMA, SARIMAX, SVR, and CNN). ANOVA assesses the variation between the predicted values of the different models and compares it to the overall variation in the data; the goal is to determine whether there are statistically significant differences in the performance of the models. If ANOVA detects significant differences, further analysis can be conducted using post hoc tests to identify the specific pairs of models that significantly differ from each other. One commonly used post hoc test is Tukey's honestly significant difference (HSD) test, which compares all possible pairs of models and determines whether the differences in their predicted values are statistically significant. This statistical significance approach helps with comparing and ranking the models based on their performance and with identifying the models that significantly outperform or underperform the others. It provides a quantitative and objective measure of the statistical differences between the techniques, allowing for informed decision-making when selecting the most-appropriate model for time series forecasting tasks.
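Both metrics and the ANOVA/Tukey procedure translate into a short Python sketch, assuming numpy, scipy, and statsmodels; the dictionary-based grouping of model predictions is our illustrative assumption.

# Sketch of the MSE and SMAPE metrics and of the ANOVA + Tukey HSD comparison
# described above; the prediction-grouping interface is hypothetical.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def mse(actual, forecast):
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.mean((actual - forecast) ** 2)

def smape(actual, forecast):
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    # |F_t - A_t| over half the sum of |A_t| and |F_t|, averaged over t.
    return np.mean(np.abs(forecast - actual) / ((np.abs(actual) + np.abs(forecast)) / 2))

def compare_models(predictions):
    # predictions: dict mapping model name -> array of predicted values.
    groups = list(predictions.values())
    f_stat, p_value = f_oneway(*groups)  # one-way ANOVA across the models
    values = np.concatenate(groups)
    labels = np.repeat(list(predictions), [len(g) for g in groups])
    return f_stat, p_value, pairwise_tukeyhsd(values, labels)  # post hoc pairs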
Experimental Setup In this paper, we sought to examine the out-of-sample forecast performance of the SARIMA, SVR, and CNN models, with a focus on two key Australian indicators: the unemployment rate and the monthly number of short-term visitors. The methodology delineated in Section 3 was consistently applied across all our experimental setups. Each setup entailed two distinct data periods: one considering all available data up to December 2022, and another ending in December 2019. This approach allowed us to make an equitable comparison between the models built using the full dataset and those developed using a reduced data subset. The design of our experiments was intended to assess the influence of the COVID-19 pandemic on the correlation between our chosen indicators and the Google Trends data. By intentionally omitting the data from the last three years and focusing on the pre-pandemic period, we evaluated whether the dynamics between the indicators and Google Trends were dissimilar during a relatively more economically stable period. In each experimental setup, we constructed four iterations of each of our 12 models (elaborated further in Tables 4-6). Each iteration was trained and tested on a different dataset corresponding to its unique forecasting horizon. We built two SARIMA models, one that utilised all the historical data and another that incorporated only the data from 2005 onwards; the objective here was to assess whether the inclusion of more historical data enhanced the model's performance. Subsequently, two SARIMAX models were developed, one utilising all exogenous variables and another using a subset of selected variables, as outlined in Section 3. This exercise allowed us to juxtapose the performance of SARIMAX with the SARIMA model constructed using more-recent data, as well as to discern whether the Google Trends data could bolster the model's accuracy. The SARIMAX model with selected exogenous features served as a comparison point with the original SARIMAX model. Furthermore, we constructed the SVR and CNN models using all the features from the target variable and then a subset of these features after implementing the RFE, MI, and f_test. This approach enabled us to contrast the performance of SVR and the CNNs with SARIMA and to determine whether the feature selection enhanced the model performance. Later, the SVR and CNN models were constructed using all the Google Trends features along with the target variable features, for comparison with SARIMAX; the same models were then built using a selected subset of features. This exhaustive comparative analysis enabled us to assess the effectiveness of the machine learning and deep learning models vis-à-vis the conventional ones. It also helped ascertain whether the incorporation of Google Trends data enhanced the predictive accuracy of these models and whether contemporary models more effectively encapsulated the relationship between the variables. Moreover, the utility of feature selection in improving outcomes could be gauged. Comparing the different experimental sets provided insights into the influence of the Google data on the model performance, in particular by contrasting the model outcomes using datasets that include and exclude the COVID-19 period.
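As an illustration of this setup, the sketch below builds one holdout split per forecasting horizon; the column handling, cutoff strings, and split logic are our reading of the setup, and the authoritative implementation is in the paper's repository.

# Illustrative per-horizon train/test splits for the two data periods;
# the exact split logic in the paper's repository may differ.
import pandas as pd

HORIZONS = [3, 6, 12, 24]  # months ahead; one model iteration per horizon

def horizon_splits(df, end="2022-12"):
    # Yield (horizon, train, test) triples: the test set is the last h
    # months before `end`; pass end="2019-12" for the pre-COVID setting.
    df = df.loc[:end]
    for h in HORIZONS:
        yield h, df.iloc[:-h], df.iloc[-h:]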
Results and Discussion In this section, we present an overview of the results for each individual set of experiments. Additional comparisons between Experiments 1 and 2 and between Experiments 3 and 4 were conducted to highlight the differences in performance between the models and in the features selected for the different data-driven settings influenced by COVID-19. Given the large number of experiments and the comprehensive statistical significance tests for the built models, only the comparison of results using the MSE is presented in Table 7, accompanied by the feature-selection results for each set of experiments in Table 8. The full results, along with the data and code used to conduct these experiments, are available at the following Git repository: https://github.com/a-abdulkarim/time-series-forecasting-p1/ (accessed on 1 July 2023). Experiment 1: The first experiment revealed significant differences in performance among the models across the four forecasting horizons. The SARIMAX_ALL model outperformed all others for the 3-, 6-, and 12-month horizon levels, indicating its strong predictive power in the short- to mid-term. Interestingly, the SARIMA_HIST model, utilising historical data without the inclusion of exogenous variables, performed better for the 24-month horizon, hinting at its efficacy in capturing long-term trends and cycles. Compared to SARIMA_RECENT, which takes into account only recent data, SARIMA_HIST's superior performance for the 24-month horizon suggested that a broader historical context enhances long-term forecasting. SARIMAX_ALL's outperformance of SARIMA_HIST and SARIMA_RECENT for shorter horizons demonstrated the value of integrating all available features, including exogenous variables, into time series models for short-term forecasts. Experiment 2: In the second experiment, the superiority of the SARIMAX_ALL model continued for the 6- and 12-month horizons, but it faced competition from the CNN_TARGET_GI_FS model for the 3-month horizon. This indicates that deep learning models like CNN_TARGET_GI_FS can capture intricate data patterns more effectively in the short-term. For the 24-month horizon, however, the SARIMA_HIST model again outperformed, reaffirming the notion that simpler models utilising a broader historical context fare better in long-term forecasting. (Table 8 reports the feature-selection results: the features selected and the selected exogenous variables for each experiment.) Feature-selection models, such as SARIMAX_FS and CNN_TARGET_GI_FS, performed comparably to their all-feature counterparts for shorter horizons, suggesting that narrowing down the feature set does not necessarily impair short-term predictive capacity. Experiment 3: The third experiment introduced a new dominant model: SVR_TARGET_GI_FS. This machine learning model with feature selection demonstrated the best performance at the 3- and 6-month horizon levels, outperforming both SARIMA variants and SARIMAX_ALL. This suggested that machine learning techniques coupled with feature selection can excel in short-term forecasts. However, the SARIMAX_ALL model still held its ground for the 12-month horizon, and SARIMA_HIST regained superiority for the 24-month horizon. Again, feature-selection models showed strong performance. The SVR_TARGET_GI_FS model's superiority over SARIMAX_ALL for shorter horizons indicated that feature selection can even outperform all-feature models in certain situations. Experiment 4: In the final experiment, the deep learning model CNN_TARGET_GI_FS excelled for the 3- and 6-month horizons, while SARIMAX_ALL performed best for the 12-month horizon. For the 24-month horizon, the SVR_TARGET_FS model, a machine learning model with feature selection, surpassed the other models, affirming the potency of feature selection for longer-term forecasting. Across all four experiments, the results demonstrated the strengths and weaknesses of each model for different forecasting horizons, the potential advantages of machine learning and deep learning techniques over traditional SARIMA/SARIMAX models, and the possible gains from employing feature selection. Taken together, these experiments provided nuanced insights into the interplay between the traditional models (SARIMA and SARIMAX) and the more modern ML and DL techniques. While the former maintained strong performance at medium-term horizons, in particular when supplemented with a complete feature set, the latter, especially when utilising feature selection, appeared more effective for both short- and long-term forecasting. Thus, the decision between ML/DL and traditional methods hinges on the forecasting horizon, underlining the importance of a targeted approach in time series prediction. Unemployment Forecasting Comparing both experiments, it was clear that the inclusion or exclusion of the COVID period data significantly influenced the predictive power of the models.
In the shorter-term forecasts (3-, 6-, and 12-month horizons), the exclusion of the COVID period seemed to enhance the performance of models such as the CNNs, possibly due to the reduction of unprecedented volatility in the training data. In contrast, the SARIMAX model, which was the most-effective short- and mid-term forecasting model when the COVID data were included, saw its dominance reduced when the COVID data were excluded. This indicated that the SARIMAX model might be particularly effective at accounting for abrupt exogenous shocks such as the COVID pandemic. For the 24-month horizon, the SARIMA_HIST model remained the superior performer, with or without the COVID data, indicating its robustness in long-term forecasting regardless of drastic economic changes. These comparisons highlight the importance of considering the stability of the economic environment and the characteristics of the training data when selecting and interpreting forecasting models. Number of Visitors Forecasting Comparing both experiments, it became evident that the inclusion or exclusion of the COVID period data significantly impacted the models' predictive performance. In the short-term forecasts (3-, 6-, and 12-month horizons), the exclusion of the COVID period data seemed to improve the performance of the CNN model with feature selection, possibly due to the removal of the unpredictable COVID-induced volatility from the training data. Conversely, the SARIMAX model using all exogenous variables, which was the most-effective short- and mid-term forecasting model with the COVID data included, saw a reduction in its dominance when the COVID data were excluded. This indicated that the model was particularly potent when dealing with abrupt exogenous shocks such as those experienced during the COVID-19 pandemic. For the long-term 24-month horizon forecast, the SARIMA_HIST model remained the best performer, irrespective of whether the COVID data were included or excluded, highlighting its robustness in long-term forecasting regardless of drastic changes in the economic environment. These findings underlined the importance of considering both the stability of the economic environment and the nature of the training data when choosing and interpreting forecasting models. They also demonstrated how different models may respond differently to periods of economic volatility, further emphasising the need for careful model selection based on the specific context and forecasting horizon. Conclusions This research investigated the efficacy of various traditional and machine learning models in forecasting key economic indicators, namely the monthly unemployment rate and the monthly number of short-term visitors to Australia. It also explored the role of Google Trends data in enhancing the forecasting performance of these models. Overall, the results indicated that both machine learning (ML) and deep learning (DL) models offer considerable advantages over traditional SARIMA and SARIMAX models in forecasting these indicators, particularly in the shorter-term forecasting horizons. For instance, the SVR model demonstrated superior performance over SARIMA and SARIMAX in predicting the unemployment rate across all forecasting horizons in Experiments 1 and 2. Similarly, the CNN model was more effective than its traditional counterparts in predicting short-term visitor numbers in Experiments 3 and 4, especially in the short- to mid-term forecasting horizons.
These findings align with the growing recognition of ML and DL techniques as valuable tools in economic forecasting, capable of handling complex data structures and identifying intricate patterns in the data. However, the results also underscored the robustness of traditional models such as SARIMA and SARIMAX in long-term forecasting, reminding us of their enduring relevance in certain forecasting contexts. Importantly, the inclusion of Google Trends data proved to enhance the forecasting performance of several models. Models incorporating Google Trends data, such as SARIMAX and the CNN with feature selection, consistently outperformed their counterparts that relied solely on historical data, particularly in the short- to mid-term forecasting horizons. These findings affirmed the potential of Google Trends data as a valuable supplement to traditional economic data, particularly in an era where digital information plays an increasingly central role in economic activities. This study, however, was not without its limitations. The forecasting performance of the models might be sensitive to the inclusion or exclusion of extreme events, such as the COVID-19 pandemic period data. The volatility introduced by such events can impact the predictive capability of different models in various ways, making it difficult to ascertain the most-effective model across all possible contexts. Furthermore, while the study considered a broad range of models and data types, there are still other potentially useful models and data sources that remain unexplored. For instance, other types of ML and DL models, such as recurrent neural networks (RNNs) and transformers, might offer different insights or outperform the models investigated in this study. Future research should aim to address these limitations and explore these uncharted territories. More-comprehensive investigations could consider a broader range of extreme events and their impacts on different models, or investigate other types of ML and DL models and their efficacy in forecasting economic indicators. Moreover, future studies could explore other types of auxiliary data, such as social media data or other online data, to gauge their potential in enhancing economic forecasts. In conclusion, this research underscored the potential of ML and DL techniques in economic forecasting and highlighted the value of integrating Google Trends data into these models. However, it also stressed the importance of model selection based on the specific forecasting context and the need for the continuous exploration of novel models and data sources to enhance our forecasting capabilities. Author Contributions: Conceptualisation, A.A.K., E.P. and S.M.; methodology, A.A.K., E.P. and S.M.; formal analysis, A.A.K.; writing-original draft preparation, A.A.K.; writing-review and editing, E.P. and S.M.; supervision, E.P. and S.M. All authors have read and agreed to the published version of the manuscript.
2023-08-03T15:27:02.853Z
2023-07-30T00:00:00.000
{ "year": 2023, "sha1": "e6e7c9a2ed5923c8e832f889a40d09e90024f71a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1099-4300/25/8/1144/pdf?version=1690708079", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7e45a43141c8599d91033e4a305cb8143fca89c4", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
207928793
pes2o/s2orc
v3-fos-license
Tip Crack Imaging on Transparent Materials by Digital Holographic Microscopy With this study, we propose a method to image the tip crack on transparent materials by using digital holographic microscopy. More specifically, an optical system based on Mach–Zehnder interference along with an inverted microscope (Olympus CKX53) was used to image the tip crack of a Dammar Varnish transparent material under thermal excitation. A series of holograms were captured and reconstructed for the observation of the changes of the tip crack. The reconstructed holograms were also compared temporally to compute the temporal changes, showing the crack propagation phenomena. The results show that the Dammar Varnish is sensitive to the ambient temperature. Our research demonstrates that digital holographic microscopy is a promising technique for the detection of fine tip cracks and their propagation in transparent materials. Introduction Fracture mechanics of crack propagation behavior under static and fatigue loads is an important subject. Over the past few years, mechanical failures have led to a growing awareness of the impact of cracks and stress in manufactured parts on their failure strength. It is known that the pressure or thermal load borne by a crack has an important effect on crack propagation. The number of cracks caused by environmental influences is significant; hence, by considering this relationship, the influence of pressure or thermal load on crack growth behavior can be evaluated by analyzing the stress (or thermal) intensity and the direction and number of crack growths [1,2]. P.F. Gao investigated the deformation in the fatigue crack tip plastic zone and its role in the fatigue crack propagation of a titanium alloy with a tri-modal microstructure by combining scanning electron microscopy and electron backscatter diffraction. It was found that the main crack was changed by secondary microcracks [3]. Similarly, D. Nowell applied in situ loading to crack tips and observed their dynamics using an electron microscope [4]. Digital holography is an optical tool that is used to reconstruct the phase and intensity maps of the tested objects simultaneously [5]. The Digital Holographic Microscopy (DHM) presented in this paper takes advantage of both digital holography and microscopy, with a Micro Objective (MO) in the object beam of the digital hologram recording system. Because the tested sample is magnified through the MO, the resolution of the reconstructed information can reach a sub-micrometer level. The DHM technique has been applied successfully in fields such as Microelectromechanical Systems (MEMS) device morphology and deformation detection [6,7], workpiece surface roughness detection [8,9], preparation-free detection of living cells or biological tissue [10,11], transparent functionally gradient material detection [12], etc. Notable applications include, for example, P. Asgari, who detected microstructural corrosion in austenitic stainless steel with a DHM, where the stainless steel temperature is increased above the critical temperature so that intergranular corrosion can be detected [13]. In another application, Shinichi Suzuki used high-speed holographic microscopy to take microscopic photographs at the instant a crack occurs. It was found that the crack speed after bifurcation is slightly slower than prior to bifurcation [14].
With this research, we present tip crack imaging [15] of an in-house-made transparent sample by using DHM. The transparent sample was used to simulate the surface of a traditional oil painting. When the external temperature changes, the structure of precious oil paintings undergoes a series of changes such as expansion, contraction, extension, and distortion. The varnish layer on top of the painting is traditionally used to protect the painted surface materials [14]. However, the everyday cycling of temperature changes through the years results in an aesthetically disturbing cracking pattern on top of the painting surface. The cracks also expand, propagate, and deteriorate. The severity of the cracks is also influenced by other factors such as environmental temperature, humidity, and external vibrations [16][17][18]. It is known that cracks, either in the varnish surface or in the constituent materials of the structure, are among the most common problems in art conservation [19]. In our research, holograms of the tip crack were recorded at regular time intervals while the temperature continuously changed. The changes were detected by comparing several different reconstructed phases from the recorded holograms. Key Principle of DHM In off-axis DHM, the reference beam R(x, y) is a plane wave, as follows:

R(x, y) = A_r exp(iϕ_r) exp[ik(t_x x + t_y y)], (1)

where A_r and ϕ_r are the amplitude and phase of the reference beam, respectively, and both are constants for a uniform parallel laser beam; t_x and t_y refer to the tilt direction relative to the optical axis, and k is the wave number. The original object wave O_o(x, y), after the plane beam is modulated by the tested sample and the MO, is

O_o(x, y) = A_z(x, y) exp[iϕ_z(x, y)] exp[-ik(x^2 + y^2)/(2µ)], (2)

where A_z(x, y) and ϕ_z(x, y) are the distributions of amplitude and phase modulated by the object, respectively. The quadratic term is the spherical phase error introduced by the MO, and µ is the spherical phase curvature radius. The hologram I(x, y) includes a zero-order image, the original objective wave, and the conjugate one [20], as follows:

I(x, y) = |R(x, y)|^2 + |O_o(x, y)|^2 + R*(x, y)O_o(x, y) + R(x, y)O_o*(x, y). (3)

Here, * is the conjugate operation, and a Fourier spectrum window used as a high-pass filter can be applied to extract the original objective wave term [21]:

I_f(x, y) = R*(x, y)O_o(x, y). (4)

For numerical reconstruction of the digital hologram, the original objective wave should be recovered completely; hence, we need to simulate the reference beam R(x, y), and especially the compensation spherical phase, in order to remove the spherical phase error exp[-ik(x^2 + y^2)/(2µ)] in Equation (4), as follows:

D(x, y) = exp[ik(t_x x + t_y y)] exp[ik(x^2 + y^2)/(2µ)], (5)

where µ is the spherical phase curvature radius that needs to be adjusted during numerical reconstruction. The object plane can be reconstructed by a convolution formula under the paraxial approximation, based on the Kirchhoff scalar diffraction formula [22], as follows:

O_r(x, y) = F^{-1}{F[D(x, y) I_f(x, y)] · F[g_z(x, y)]}, (6)

where F is the fast Fourier transform operator. The impulse response function g_z(x, y) of the convolution method is as follows:

g_z(x, y) = [exp(ikz)/(iλz)] exp[ik(x^2 + y^2)/(2z)], (7)

where λ is the wavelength of the laser source, and z is the distance between the object plane and the hologram plane. O_r(x, y) is the reconstructed complex amplitude of the object wave, including the intensity information and the phase information.
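The reconstruction chain in Equations (3)-(7) can be sketched numerically. The following is a minimal illustration, not the authors' code: the hologram array, the position of the spectral window, and the tilt/curvature values (t_x, t_y, µ) are placeholder assumptions that would have to be tuned for a real recording, while the wavelength, pixel size, and the 28 mm distance are taken from the paper.

```python
# Minimal sketch of Eqs. (3)-(7): window the +1 spectral order of an
# off-axis hologram, multiply by the simulated reference with spherical
# compensation, then Fresnel-propagate by convolution.
import numpy as np

wavelength = 632.8e-9                 # He-Ne laser (paper value)
pixel = 2.2e-6                        # CCD pixel pitch (paper value)
z = 28e-3                             # recording/reconstruction distance (paper)
k = 2 * np.pi / wavelength

hologram = np.random.rand(1024, 1024)     # placeholder for a recorded frame
ny, nx = hologram.shape
x = (np.arange(nx) - nx // 2) * pixel
y = (np.arange(ny) - ny // 2) * pixel
X, Y = np.meshgrid(x, y)

# Eq. (4): window the real-image term around its carrier frequency.
spectrum = np.fft.fftshift(np.fft.fft2(hologram))
cy, cx, r = ny // 2 + 180, nx // 2 + 240, 80      # illustrative window
yy, xx = np.ogrid[:ny, :nx]
window = ((yy - cy) ** 2 + (xx - cx) ** 2) <= r ** 2
filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * window))

# Eq. (5): simulated reference (tilt + spherical compensation); tx, ty, and mu
# must be tuned until the residual carrier and spherical fringes vanish.
tx, ty, mu = 5e-3, 5e-3, 0.05                     # illustrative values
D = np.exp(1j * k * (tx * X + ty * Y)) * np.exp(1j * k * (X**2 + Y**2) / (2 * mu))

# Eqs. (6)-(7): convolution with the paraxial impulse response g_z
# (centering and sampling subtleties are omitted in this sketch).
g = np.exp(1j * k * z) / (1j * wavelength * z) * np.exp(1j * k * (X**2 + Y**2) / (2 * z))
O_r = np.fft.ifft2(np.fft.fft2(filtered * D) * np.fft.fft2(np.fft.ifftshift(g)))

amplitude, phase = np.abs(O_r), np.angle(O_r)     # intensity and wrapped phase
```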
Preparation of the Transparent Sample We selected a Chinese writing brush and Dammar Varnish, as shown in Figure 1a,b, to make the transparent sample with the original tip crack. Dammar Varnish, made of gum dammar and turpentine, is often used to protect paint by coating the painted surface. We picked up a piece of the brush and put it on a clean slide plate. Then, the Dammar Varnish was painted evenly around the brush. Subsequently, we removed the brush and waited until the Dammar Varnish dried. In this way, a transparent sample with a known defect location was obtained. As shown in Figure 1c, the area inside the red dotted rectangle is the test sample, with a size of approximately 60 mm^2. The cracks inside the white dotted ellipse are formed at the tip of the brush. Experimental Setup We set up an off-axis holographic optical system, shown in Figure 2a, which is based on the Mach-Zehnder interference system along with an MO to magnify the sample. As shown in Figure 2, a plane beam, obtained after the He-Ne laser passes through the spatial filter and collimator, is divided into objective and reference beams. The objective beam includes the diffracted wave of the magnified sample. The objective beam interferes with the reference beam on the hologram plane, incident on the sensor plane of the Charge-Coupled Device (CCD).
A 3D sketch of the experimental system is also given in Figure 2b. The Mach-Zehnder interference system was built based on an inverted microscope (Olympus CKX53, Olympus, Shanghai, China). The MO was set to 4× to magnify the sample. The He-Ne laser wavelength is 632.8 nm. A DHC MER-500-7UM CCD (DAHENG, Shanghai, China) was used to record the holograms; its resolution is 2592 (H) × 1944 (V) pixels, with a single pixel size of 2.2 µm × 2.2 µm. The distance from the image plane of the MO to the CCD is 28 mm, which is the recording distance of the hologram. Similarly, the reconstruction of the original image takes place by back-propagating the hologram over a distance of 28 mm. Throughout the experiment, to stimulate changes of the tip crack, the sample was heated with a 50 W incandescent lamp. A thermocouple was used to monitor the change in temperature, as shown in Figure 3a. Its precision was 0.1 degrees centigrade, and it was in direct contact with the surface. The dotted circle marks the tested area of the tip crack, as shown in Figure 3b. The hologram shown in Figure 3b was captured before the heat was applied. While the sample was heated, six holograms were captured every 4 h. The first step was a 12 h heating: the temperature increased quickly in the beginning and slowly later, and the 12 h heating was used to reach a balance at which the temperature changes only slowly. When we had heated the sample for 24 h and the temperature was steady, we added 12 h of heating, and then 4 more hours. It took 40 h to complete the recording of all holograms. The temperature reached between 36.5 °C and 38 °C. The heating time and the recorded temperature for each hologram are presented in Table 1. On a specific note, to prevent the recorded holograms from being influenced by the lamp, the lamp was turned off before each hologram was captured; hence, the temperatures recorded in Table 1 are the temperatures just before the lamp was turned off. Experimental Results and Discussion The holograms were reconstructed using the convolution algorithm [5]. The phase distribution is shown in Figure 4a. It is clear that there is a spherical phase error due to the use of the 4× MO. A numerical compensation, as given in Equations (5) and (6), was used to remove this spherical phase error. The compensation phase is shown in Figure 4b, and Figure 4c presents the corrected phase distribution, which is the wrapped phase information of the sample before heating. The reconstructed phase distributions after the sample was heated are shown in Figure 5. The differences between consecutive phase distributions are clear, indicating an obvious thermal influence due to the lamp heating. It is suspected that these changes are due to the Dammar Varnish material coated on the glass plate expanding under the heat. However, no new cracks appeared in these phases. In order to clearly observe the changes of the tip crack on the sample, phase unwrapping was performed [5,23]. The unwrapped phase was then differenced between the initial frame and each consecutive frame. Figure 6 presents the differenced phases, where the differences between the phases are clear.
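The unwrap-and-difference step just described can be sketched as follows. This is a minimal illustration rather than the authors' code: the paper cites its own unwrapping references [5,23], so scikit-image's unwrap_phase is used here only as a stand-in, and the phase stack, image size, and trace row are placeholders.

```python
# Sketch of the phase post-processing used for Figures 5-7: unwrap each
# reconstructed phase map, subtract the pre-heating reference, and pull a
# line trace across the crack tip. `phases` is a placeholder stack.
import numpy as np
from skimage.restoration import unwrap_phase

phases = [np.angle(np.exp(1j * np.random.rand(256, 256)))  # placeholder frames
          for _ in range(7)]                               # 1 reference + 6 heated

reference = unwrap_phase(phases[0])
differences = [unwrap_phase(p) - reference for p in phases[1:]]

# Line trace across the crack tip (row index chosen for illustration).
row = 128
traces = np.stack([d[row, :] for d in differences])
print(traces.shape)   # (6, 256): one trace per heating interval
```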
We selected the most significant section of the phase maps in Figure 6, at the dashed-line position, extracted the profiles, and show them in Figure 7. It is clear that the phase increases as the heating time increases. The minimum phase difference corresponds to the shortest heating time, while the maximum one corresponds to the longest heating time. As shown in Figure 7, on the right side of the dashed line, the phase increases. It is therefore safe to conclude that the interspace of the crack expanded because of the heating. As a result, there is an internal force caused by the expanded interspace. On the left of the dashed line, the interspace should be compressed, which means the phase changes decrease. Hence, the phase differences on the left of the dashed line should correspond to the actual phase differences of the tip crack. This result indicates that the cracking behavior was altered when the sample was exposed to the thermal effect induced by the lamp. Hence, it shows that, through the implementation of DHM, the visualization of crack propagation is viable. Comparison Experiment Setup and Results A contrast experiment was also conducted to verify that the phase changes (Figure 7) were actually caused by a crack being heated rather than by other confounding factors. For this purpose, two new samples were prepared, as shown in Figure 8. One without a crack is shown in Figure 8a, and it is approximately 60 mm^2, while the cracked one is shown in Figure 8b, and it is about 60-65 mm^2. For comparison, we performed two comparative experiments: the first sample, without a crack, was heated, to observe whether heating alone affected the phase change, and the second sample, with a crack, was not heated, to observe whether the crack alone affected the phase change. For both samples, a series of holograms was recorded every four-hour period, the same as for the experimental group above. The heating time and the temperature of the two samples are listed in Tables 2 and 3, respectively. Experimental results for the phase changes, based on the holograms obtained according to the values in Tables 2 and 3, are shown in Figures 9-12. The phase changes were calculated by the same method used in the previous experiment. We found that the phase of neither sample underwent changes. Hence, the changes of the tip cracks on the transparent material are due to the changes in temperature rather than to other factors. Conclusions With this study, the cracks and deterioration of a transparent material were detected by DHM. The tip crack propagation of the material is clearly shown through the experimental results. Thus, the results support the notion that DHM is a viable candidate to detect the fine tip crack of some transparent materials. Detailed phase changes were shown, demonstrating that the varnish material is sensitive to heat. The results show that, in order to preserve Dammar Varnish materials, insulation of the materials from heat impact is necessary. The consequences of this work for cultural heritage can be multiple. First, it is proven that a DHM system is worth further investigation for its potential to be developed into a portable investigation system capable of being used indoors and outdoors. Secondly, it is a method worth studying further as one that can be used to document, with the highest possible accuracy, the existing crack condition of important artworks, and to regularly monitor any alterations. This offers not only a highest-standard crack documentation method, but also an insight for conservation scientists into how surface cracks deteriorate and how quickly, allowing them to evaluate the risk in each artwork separately from only the tip crack coordinates. In future work, it would be interesting to use a tunable-wavelength laser to detect crack propagation in non-transparent materials in cultural products.
Figure 3. Photo of the heat loading and a representative hologram. (a) Photo of the heat loading. (b) Representative hologram.
Figure 4. Reconstructed original phase distribution for the reference hologram before heating. (a) With the spherical phase error. (b) Numerical compensation phase. (c) Phase distribution following compensation.
Figure 7. Six phase line traces extracted from each of the phase differences in Figure 6a-f, respectively.
Figure 8. Samples used in the contrast experiments: (a) sample without a crack and (b) sample with a crack.
Figure 10. Six phase line traces extracted from each phase difference of the comparison experiment with a cracked, unheated sample.
Figure 12. Six phase line traces extracted from each phase difference of the comparison experiment with a no-crack, heated sample.
Table 1. The heating time and the recorded temperature for each hologram.
Table 2. The heating time and the temperature at each hologram of the sample without a crack.
Table 3. The heating time and the temperature at each hologram of the sample with a crack.
2019-10-03T09:03:24.542Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "46cc5f84419d3cfed9d0727d0c3c544f231fd7ad", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2313-433X/5/10/80/pdf?version=1570874954", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "46cc5f84419d3cfed9d0727d0c3c544f231fd7ad", "s2fieldsofstudy": [ "Materials Science", "Physics", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Medicine", "Mathematics" ] }
270641873
pes2o/s2orc
v3-fos-license
GmMYB183, a R2R3-MYB Transcription Factor in Tamba Black Soybean (Glycine max. cv. Tamba), Conferred Aluminum Tolerance in Arabidopsis and Soybean Aluminum (Al) toxicity is one of the environmental stress factors that affects crop growth, development, and productivity. MYB transcription factors play crucial roles in responding to biotic or abiotic stresses. However, the roles of MYB transcription factors in Al tolerance have not been clearly elucidated. Here, we found that GmMYB183, a gene encoding a R2R3 MYB transcription factor, is involved in Al tolerance. Subcellular localization studies revealed that the GmMYB183 protein is located in the nucleus, cytoplasm, and cell membrane. Overexpression of GmMYB183 in Arabidopsis and soybean hairy roots enhanced plant tolerance towards Al stress compared to the wild type, with higher citrate secretion and less Al accumulation. Furthermore, we showed that GmMYB183 binds the promoter of the GmMATE75 gene, which encodes a plasma-membrane-localized citrate transporter. Through a dual-luciferase reporter system and yeast one-hybrid assays, the GmMYB183 protein was shown to directly activate the transcription of GmMATE75. Furthermore, the expression of GmMATE75 may depend on the phosphorylation of the Ser36 residue in GmMYB183 and on two MYB sites in the P3 segment of the GmMATE75 promoter. In conclusion, GmMYB183 conferred Al tolerance by promoting the secretion of citrate, which provides a scientific basis for further elucidating the mechanism of plant Al resistance. Introduction Aluminum (Al), the most abundant metallic element in the Earth's crust, is solubilized into a toxic trivalent cation (Al^3+) in acidic soils with a pH value of less than 5.0. This solubilized Al inhibits plant root growth, blocking nutrient and water uptake and resulting in severe losses in crop production [1,2]. Approximately 50% of the world's potential arable land is acidic, and soil acidification is increasing due to industrial pollution and modern farming practices [3,4]. Therefore, Al toxicity presents a huge threat to agricultural production and productivity. Transcription factors such as zinc finger proteins, WRKY, HD-ZIP, or NAC are known to play crucial roles in Al tolerance. One example is STOP1/ART1, a zinc finger transcription factor, which regulates the expression of various genes related to Al tolerance in different crop species, such as Arabidopsis (Arabidopsis thaliana) [27], cotton (Gossypium hirsutum) [28], pigeonpea (Cajanus cajan) [29], rice (Oryza sativa) [30], sorghum, soybean [31], and tobacco (Nicotiana tabacum) [32]. Although the transcription of STOP1 is unaffected by Al stress, its activities at the post-transcriptional or post-translational levels are regulated by Al stress [33,34]. The SUMOylation of STOP1 has been found to be involved in modulating Al tolerance, suggesting that post-translational modifications of transcription factors are crucial in Al stress responses [35]. Similarly, WRKY22 has been shown to enhance Al tolerance by increasing the expression of OsFRDL4 and promoting citrate secretion in rice [36]. Another transcription factor, HvHOX9, a member of the HD-ZIP family, mediates Al resistance in barley roots by inhibiting Al binding to the root cell wall and increasing the apoplastic pH [37]. Furthermore, VuNAR1, a NAC transcription factor, promotes Al tolerance by activating WAK1 expression and regulating cell wall pectin metabolism [38]. However, the exact roles of the MYB family of genes in Al tolerance have not been fully elucidated.
In plants, the MYB family of transcription factors is considered one of the largest families and plays significant roles in regulating the gene transcription networks involved in various developmental and stress-response mechanisms [39]. A study conducted by Wei et al. demonstrated that the overexpression of TaMYB344 in tobacco confers tolerance to heat-, drought-, and salt-induced stress [40]. Similarly, Shin et al. found that StMYB1R-1 activates genes associated with drought tolerance, leading to improved drought resistance in potatoes [41]. Additionally, the gene ARS1, which encodes an R1-MYB-type transcription factor, is induced by salinity and contributes to salt tolerance in tomato leaves [42]. Similarly, GsMYB7 has been identified as a potential regulator of soybean's tolerance to acidic aluminum stress through the regulation of downstream genes [43]. Hence, MYB transcription factors are believed to play crucial roles in plants' responses to Al stress. Tamba black soybean (TBS) is an Al-tolerant genotype with significant potential for Al tolerance via the secretion of citrate under Al stress (Figure S1). Previously, we studied the effect of GmMYB183 on the Al stress response in TBS and observed significant phosphorylation shifts, including upregulated phosphorylation of a serine residue (Ser36) [44]. In this study, our objective was to further investigate the function of GmMYB183 in the Al stress response. Our findings revealed that overexpression of GmMYB183 leads to increased tolerance to Al in transgenic Arabidopsis and soybean hairy roots. Additionally, we discovered that GmMYB183 directly binds to the promoter of GmMATE75, resulting in enhanced expression of this gene. In addition, the binding of GmMYB183 to the MYB sites in the P3 segment of the GmMATE75 promoter may depend on the phosphorylation of the Ser36 residue of GmMYB183. GmMYB183 Gene Isolation and Sequence Analysis Based on our previous quantitative phosphoproteomic analysis of TBS, we observed that, under acidic aluminum exposure, GmMYB183 was hyperphosphorylated at Ser36. We then obtained the GmMYB183 gene sequence data from the Phytozome 12 database, using the gene ID Glyma.06G187600. Specific primers for GmMYB183 amplification were designed from the full-length cDNA and were used to clone the gene from TBS roots by RT-PCR. The primers used in this study are shown in Table S1. We used the ExPASy platform (https://www.expasy.org/, accessed on 24 February 2020) to predict the physical and chemical properties of the protein. The localization of the GmMYB183 protein was predicted using NetNES 1.1, and the BLAST tool was then used to search for GmMYB183 homologous proteins in the NCBI database (https://blast.ncbi.nlm.nih.gov/Blast.cgi, accessed on 24 February 2020). The conserved domains and the three-dimensional structure of the GmMYB183 protein were predicted using the Conserved Domain Search (https://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi, accessed on 24 February 2020) and the Phyre2 server (http://www.sbg.bio.ic.ac.uk/phyre2/html/page.cgi?id=index, accessed on 24 February 2020). In addition, DNAMAN was used to perform homology analysis of the GmMYB183 protein. Then, the MEGA 7.0 software was used to construct phylogenetic trees through the neighbor-joining method with 1000 bootstrap replications.
RNA Extraction and Quantitative Real-Time PCR (RT-qPCR) Total RNA was extracted using the RNAiso Plus kit (Takara, Shiga, Japan) according to the manufacturer's instructions. First-strand cDNA was synthesized using the PrimeScript™ RT reagent Kit with gDNA Eraser (Takara, Shiga, Japan). Quantitative real-time PCR (RT-qPCR) was performed according to our previous study, with the 40SrRNA gene (GenBank: XM_003549836.4) as an internal control [44]. The primers used in this study are presented in Table S1. Vector Construction and Transformation of GmMYB183 into Arabidopsis and Soybean Hairy Roots The coding region of the GmMYB183 cDNA was ligated into a pMD™ 19-T vector (Takara, Shiga, Japan). Fragments encoding GmMYB183 were amplified from the pMD™ 19-T-GmMYB183 vector using a pair of specific primers with terminal Xba I and Sma I restriction sites (Table S1). Moreover, the coding region of the GmMYB183 gene was mutated using the Mut Express® II Fast Mutagenesis Kit V2 (Vazyme, Nanjing, China) according to the manufacturer's instructions. The expression vectors pBI121-GmMYB183-eGFP (OE) and pBI121-GmMYB183-S36A-eGFP (OE-m) were created by inserting the coding region of the GmMYB183 gene into the same restriction sites under the control of the CaMV 35S promoter and the NOS terminator of the expression cassette. Then, pBI121-eGFP, pBI121-GmMYB183-eGFP, and pBI121-GmMYB183-S36A-eGFP were introduced into the Agrobacterium tumefaciens strain GV3101 through the heat shock method. Fully flowering Arabidopsis plants with fruit pods and flowers were used for genetic transformation through the floral dip method. The surface of the Arabidopsis seeds was disinfected with 7% sodium hypochlorite for 10 min and washed with sterile water three times, and the seeds were then spread on 1/2 MS medium (containing 500 mg/L cefixime). After germination, the Arabidopsis seedlings with green fluorescence were screened out using an LUYOR-3415RG hand-held fluorescent protein excitation light source (as indicated by the arrow in Figure S2). The expression of GmMYB183 was further detected by RT-qPCR. The seeds of the T0 generation of Arabidopsis were disinfected and cultured on 1/2 MS medium. Single-copy T1-generation transgenic Arabidopsis lines were screened according to a transgenic/wild-type ratio of 3:1. Among them, three single-copy lines (OE-1, OE-3, and OE-7) were screened from the GmMYB183 transgenic lines, and three single-copy lines (OE-m2, OE-m5, and OE-m6) were screened from the GmMYB183-S36A transgenic lines. Homozygous T2 and T3 generations were further screened for subsequent studies. The full-length coding sequence of GmMYB183, amplified from the pMD™ 19-T-GmMYB183 vector, was sub-cloned into the pBin35S-Red3 vector between the EcoR I and Xho I restriction sites, downstream of the 35S promoter, to generate a GmMYB183 overexpression vector (OX). The empty pBin35S-Red3 vector was used as the negative control (EV). Site-directed mutations were established as above. The GmMYB183 overexpression vector and the empty pBin35S-Red3 vector were introduced into the Agrobacterium rhizogenes strain K599 for the transformation of black soybean hairy roots. Soybeans were germinated in soil for 4-5 days, and samples were taken before the cotyledons were fully developed. Hypocotyls with cotyledons were cut 1 cm away from the cotyledon and used as explants. The cotyledons were infected in the prepared infection solution for 1 h, and positive hairy roots were observed and screened after 14 days of moisturized culture.
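The relative-expression analysis described in the RT-qPCR paragraph above can be illustrated with a short sketch. The paper only states that RT-qPCR followed its earlier protocol with 40SrRNA as the internal control, so the use of the standard Livak 2^(-ΔΔCt) calculation here is an assumption, and all Ct values are invented placeholders.

```python
# Minimal 2^(-ΔΔCt) sketch for RT-qPCR relative expression, with 40SrRNA
# as the internal control. All Ct values below are illustrative only.
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^(-ΔΔCt): treated sample relative to untreated control."""
    d_ct_sample = ct_target - ct_ref              # ΔCt, Al-treated
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt, control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Triplicate Ct values (placeholders): GmMYB183 vs. 40SrRNA.
ct_myb_al = np.array([24.8, 24.9, 25.1])   # target, Al stress
ct_40s_al = np.array([16.1, 16.0, 16.2])   # reference gene, Al stress
ct_myb_ck = np.array([23.9, 24.0, 23.8])   # target, no Al
ct_40s_ck = np.array([16.0, 16.1, 15.9])   # reference gene, no Al

fold = relative_expression(ct_myb_al.mean(), ct_40s_al.mean(),
                           ct_myb_ck.mean(), ct_40s_ck.mean())
print(f"Relative GmMYB183 expression under Al stress: {fold:.2f}-fold")
```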
Subcellular Localization of the GmMYB183 Protein For subcellular localization, Arabidopsis roots were mounted on glass slides, covered, and viewed using an inverted LSM800 laser scanning microscope (Carl Zeiss, Oberkochen, Germany). The empty pBI121-eGFP vector was used as the negative control. Phenotypic Identification of GmMYB183 Transgenic Lines The Arabidopsis seeds were surface-sterilized with 7% sodium hypochlorite for 15 min, washed three times in sterile water, and then incubated on 1/2 MS agar medium. For relative root growth, seedlings germinated for 3 d were transplanted into 0, 50, 100, or 150 µmol/L AlCl3 solution (containing 0.5 mmol/L CaCl2, pH 4.5) for 7 d. The relative root growth was evaluated according to the procedures described by Min et al. [45]. We then determined citrate secretion and performed hematoxylin staining according to our previous work [46]. All treatments were performed in triplicate, and each replicate contained 3 plants. Yeast Two-Hybrid and Yeast One-Hybrid Assays To evaluate the interaction between GmMYB183 and the 14-3-3 protein (GmSGFa), yeast two-hybrid assays were carried out using the AH109 yeast strain, according to the manufacturer's instructions. The GmSGFa gene was cloned into the pGADT7 vector to generate the pGADT7-GmSGFa construct, while GmMYB183 or GmMYB183-S36A was cloned into the pGBKT7 vector to generate the pGBKT7-GmMYB183 or pGBKT7-GmMYB183-S36A construct. The AH109 yeast strain was co-transformed with the different construct combinations and incubated on a medium lacking Trp (T), Leu (L), His (H), and adenine (A), but with 10 mmol/L 3-AT. On the other hand, to evaluate the binding of GmMYB183 to the GmMATE75 promoter, yeast one-hybrid assays were performed according to the procedures described by Yu et al. [47]. Briefly, the full-length GmMYB183 gene was ligated between the EcoRI and XhoI sites and fused in frame into yeast GAL4-AD effector plasmids. The four fragments of the GmMATE75 promoter were inserted into the p178 vector to generate the reporter plasmids. Subsequently, the different plasmid combinations were co-transformed into the EGY48 yeast strain, and the interactions were tested on an SD medium lacking Trp and Ura for 2-3 days at 28 °C. Positive clones were incubated on X-gal plates with filter paper for 8 h at 28 °C. Dual-LUC Assays Dual-LUC assays were performed to quantify the binding affinity of the GmMYB183 protein for the GmMATE75 gene promoter. The ORF of GmMYB183 was ligated into the Xba I and Sma I restriction sites of the pGreenII 62-SK vector to generate the 35S-GmMYB183 effector plasmid. On the other hand, the GmMATE75 promoter was cloned into pGreenII 0800-Luc at the Xho I and Hind III restriction sites to generate the proGmMATE75-Luc reporter plasmid. The 35S-GmMYB183 vector was mutated by PCR to generate 35S-GmMYB183-S36A, while the proGmMATE75-Luc vector was mutated by PCR to generate proGmMATE75-m1-Luc, proGmMATE75-m2-Luc, and proGmMATE75-m1m2-Luc. The reporter and effector vectors were transformed into the Agrobacterium tumefaciens strain GV3101 (pSoup). We then conducted infiltration into tobacco leaves and detection following the described protocols [48]. Statistical Analysis Data for the relative expression, root growth, citrate secretion, and LUC/REN ratio are presented as the mean ± the standard error of the mean (SEM). The data were processed in SPSS Statistics 19 using Duncan's test. We used GraphPad (version 8.3.0) for graph preparation and presentation. A p < 0.05 was considered to be statistically significant.
In Silico Analysis of the GmMYB183 Gene Based on the quantitative phosphoproteomic data from TBS, a protein with significantly upregulated phosphorylation induced by Al stress was screened [44]. The full-length coding sequence (CDS) of GmMYB183 was amplified using cDNA from TBS. The sequencing results showed that the CDS of GmMYB183 is 885 bp, encoding 294 amino acids with a MW of 31.87 kDa. The gene sequence was consistent with the soybean genomic MYB183 sequence (NM_001249070.1). Furthermore, NetNES 1.1 analysis showed that a nuclear export signal is located at the C-terminus, at amino acids 257-263 (Figure S3a), and CDD analysis revealed a SANT conserved domain at amino acids 77-126 of the GmMYB183 protein, with an E-value of 1.30 × 10^-14 (Figure S3b). Prediction of the GmMYB183 secondary structure by SOPMA showed that α-helices accounted for 21.77%, extended strands for 20.75%, β-turns for 6.46%, and random coils for 51.02% (Figure S3c). In addition, the three-dimensional (3D) structure was successfully modeled with the Phyre2 server (Figure S3d). Homology analysis of the GmMYB183 amino acid sequence revealed a typical SANT MYB domain, with high similarities in the N-terminal DNA-binding domain as well as in the C-terminal sequence (Figure 1). In addition, evolutionary relatedness, as evaluated by MEGA 7.0, revealed the highest homology (92.83%) between the GmMYB183 of TBS and soybean GmMYB143; these clustered into one branch with MYB transcription factors of other legumes (Figure S4).
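A small sketch can illustrate the kind of in silico checks reported above (CDS translation, protein length, and molecular weight). Biopython stands in here for the ExPASy web tools actually used, and the CDS string is a synthetic placeholder rather than the real 885 bp GmMYB183 sequence, so its computed mass will not match the reported 31.87 kDa.

```python
# Sketch of the basic in silico checks: translate a CDS and estimate the
# protein length and molecular weight. `cds` is a placeholder 885 bp ORF,
# not the real GmMYB183 sequence.
from Bio.Seq import Seq
from Bio.SeqUtils.ProtParam import ProteinAnalysis

cds = Seq("ATG" + "GCT" * 293 + "TGA")        # placeholder 885 bp ORF
protein = str(cds.translate(to_stop=True))    # yields 294 aa, as for GmMYB183

analysis = ProteinAnalysis(protein)
print(f"Length: {len(protein)} aa")           # paper reports 294 aa
print(f"MW: {analysis.molecular_weight() / 1000:.2f} kDa")
# The paper reports 31.87 kDa for the real sequence; this placeholder differs.
```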
Expression Profiles of GmMYB183 in Tissues and Perturbation by Aluminum Stress To evaluate the expression shifts of the TBS GmMYB183 gene under Al stress, we performed RT-qPCR analysis. The expression of GmMYB183 was found to be significantly suppressed in root tips within 6-24 h of treatment with 50 µmol/L Al^3+, suggesting that GmMYB183 may be involved in the Al stress response (Figure 2a). Furthermore, the expression levels of GmMYB183 in stems and leaves were found to be elevated compared to those in the roots (Figure 2b). However, there were no significant differences in the expression levels in roots, stems, and leaves compared with the control after Al stress for 24 h (Figure 2b). Subcellular Localization of the GmMYB183 Protein To evaluate the localization of the GmMYB183 protein, the full-length GmMYB183 gene was fused to the N-terminus of the eGFP (enhanced green fluorescent protein) reporter gene under a CaMV 35S promoter. The eGFP fluorescence signals were analyzed in Arabidopsis root cells via Agrobacterium-mediated transformation. It was found that the 35S-eGFP control and GmMYB183-eGFP were localized in both the nucleus and the cytoplasmic membrane (Figure 3a). Furthermore, for the mutated Ser36 variant, GmMYB183-S36A-eGFP, the signal was predominantly in the nucleus (Figure 3a). Previous studies have shown that 14-3-3 proteins regulate the subcellular localization of phosphorylated proteins [49]. In addition, Al stimuli enhance the interaction between 14-3-3 proteins and phosphorylated plasma membrane H+-ATPase, thereby promoting citrate secretion [50,51]. Furthermore, our previous data showed that Al stress significantly increased the expression of the 14-3-3a gene (GmSGFa) in TBS roots. We speculated that, through interaction with GmSGFa, GmMYB183 might change its subcellular localization. The interactions of GmMYB183 and GmMYB183-S36A with GmSGFa were studied using the yeast two-hybrid system, and as expected, the GmMYB183 protein was shown to interact with GmSGFa, but not GmMYB183-S36A (Figure 3b).
Phenotypic Identification of GmMYB183 Transgenic Arabidopsis According to the above results, we speculated that GmMYB183 plays an important role in the regulation of tolerance to Al stress. To evaluate the role of GmMYB183 in Al tolerance, six homozygous T3 transgenic Arabidopsis lines were carefully chosen for subsequent phenotypic and physiological analysis (Figure S5). Relative root growth is one of the most important indices for evaluating Al tolerance in plants [52]. Exposure to Al stress exerted a concentration-dependent inhibition of Arabidopsis root growth (Figure 4a), and the root growth of the transgenic Arabidopsis plants was likewise diminished. In addition, under Al stress, the root growth of the transgenic plants overexpressing GmMYB183-S36A was shorter than that of the WT plants. However, unlike the WT plants, the transgenic plants overexpressing GmMYB183 exhibited relatively longer root growth under Al stress (Figure 4b). Moreover, compared to the WT, upon hematoxylin staining, the roots of transgenic plants overexpressing GmMYB183 or GmMYB183-S36A showed lighter or deeper staining, respectively (Figure 4c). In addition, citrate secretion in the transgenic plants overexpressing GmMYB183 was significantly higher than that of the WT (Figure 4d), consistent with the expression of the citrate transporter gene (Figure S6). These findings imply that overexpression of GmMYB183 enhanced citrate secretion to alleviate Al toxicity in Arabidopsis. Overexpression of GmMYB183 in Soybean Hairy Roots Confers Al Tolerance To further evaluate the role of GmMYB183 in Al stress responses, the 35S::Red and 35S::GmMYB183::Red constructs were introduced into soybean hairy roots (Figures S7 and S8) [53]. The hematoxylin staining of the OX root tips was lighter than that of the EV (Figure 5a). Furthermore, citrate secretion from the OX root tips was found to be significantly higher than that from the EV (Figure 5b), consistent with the findings in Arabidopsis. In addition, under Al stress, the expression levels of Al-tolerance genes such as GmMATE75 were significantly higher in OX than in EV (Figure 5c), implying that GmMYB183 enhanced citrate secretion by promoting GmMATE75 expression.
GmMYB183 Binds to the GmMATE75 Promoter

GmMATE75, a gene encoding the citrate transporter, has been shown to enhance Al tolerance through Al-induced citrate efflux [23]. Using the 5′ rapid amplification of cDNA ends (RACE) technique, we identified a transcriptional initiation site of the GmMATE75 gene, and analyzed MYB binding sites in the GmMATE75 promoter using PlantCARE software (http://bioinformatics.psb.ugent.be/webtools/plantcare/html/, accessed on 19 April 2019) (Figure S9). Interactions between GmMYB183 and the GmMATE75 promoter were tested using the dual-luciferase detection system (Figure 6a). It was found that GmMYB183 induced much higher activity of the luciferase reporter gene compared to the empty vector control (Figure 6b). The yeast one-hybrid (Y1H) assay was also used to determine whether GmMYB183 binds MYB regions of the GmMATE75 promoter (Figure 6c). The yeast strain EGY48 co-transformed with pB42AD-GmMYB183 and the P3 reporter was the only one that showed the blue color (Figure 6d). Therefore, GmMYB183 positively regulates GmMATE75 by binding the P3 segment of the GmMATE75 promoter.

To further analyze the transcriptional regulation of GmMATE75 by GmMYB183, we designed mutation primers Mut-proGmMATE75-1 and Mut-proGmMATE75-2 to mutate the two MYB sites in the P3 segment of the GmMATE75 promoter, and generated promoter-Luc reporter constructs (pGreenII 0800-proGmMATE75-m1-Luc, pGreenII 0800-proGmMATE75-m2-Luc and pGreenII 0800-proGmMATE75-m1m2-Luc) (Figure 7a). Dual-luciferase activity detection revealed that GmMYB183 activates the expression of GmMATE75, and that this activation depends on the two MYB sites (Figure 7b).
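To illustrate the kind of promoter scan performed with PlantCARE above, the sketch below searches a promoter sequence for two commonly cited MYB-related consensus motifs (the MBS element CAACTG and the MYB core CNGTTR). Both the example sequence and the choice of motifs are illustrative assumptions; they are not the actual GmMATE75 promoter or the exact sites reported in Figure S9.

```python
import re

# IUPAC degenerate bases needed for the motifs below (illustrative subset).
IUPAC = {"N": "[ACGT]", "R": "[AG]", "W": "[AT]", "M": "[AC]"}

def motif_to_regex(motif):
    """Expand IUPAC codes into a plain regex character-class pattern."""
    return "".join(IUPAC.get(base, base) for base in motif)

# Commonly cited MYB-related consensus motifs; the study's actual sites may differ.
motifs = {"MBS": "CAACTG", "MYB core": "CNGTTR"}

# Hypothetical promoter fragment, NOT the real GmMATE75 promoter sequence.
promoter = "ATTGCAACTGAAATCCGTTAGGCTCTGTTATTTCAACTGGGCAGGTTAACGT"

for name, motif in motifs.items():
    pattern = motif_to_regex(motif)
    for hit in re.finditer(pattern, promoter):
        print(f"{name} ({motif}) at position {hit.start()}: {hit.group()}")
```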
GmMYB183 Regulates GmMATE75 in a Ser36 Phosphorylation-Dependent Manner

To investigate the importance of Ser36 phosphorylation in the GmMYB183-specific regulation of GmMATE75, Ser36 was mutated to alanine, after which the activity of the luciferase gene was analyzed using the dual-luciferase detection system (Figure 8a). Whereas GmMYB183 promoted the activity of the GmMATE75 promoter, the mutated GmMYB183-S36A did not (Figure 8b). These data show that GmMYB183 activity in regulating the expression of GmMATE75 may depend on the phosphorylation of its Ser36.

Discussion

Al toxicity is a crucial factor that significantly restrains plant growth and crop productivity in acidic soils [1]. Recent studies have shed light on the regulation of Al-tolerance-related genes by transcription factors [54-58]. Nonetheless, research exploring the post-translational modification of transcription factors in response to Al stress is scarce. Remarkable shifts in phosphorylation have been observed in GmMYB183 in response to Al stress, without any alterations in the transcript levels of GmMYB183 [44]. Interestingly, the phosphorylation of ALR1, a recently discovered Al receptor, is also regulated by Al stress [59]. With increasing Al concentration, the phosphorylation of ALR1-BAK1 increased, similar to GmMYB183, which may partially explain the concentration-dependent release of organic acids from roots under Al stress. In addition, Al stress induces the kinase activity of MPK4, which interacts with and phosphorylates STOP1 [60]. Hence, it is probable that GmMYB183 contributes to Al tolerance through its phosphorylation. However, the mechanism by which ALR1 regulates GmMYB183 remains unclear. Further studies are necessary to provide a scientific basis for elucidating the mechanism of plant Al tolerance.
Hematoxylin staining has been employed to assess the levels of stress tolerance in soybean. For instance, roots overexpressing the GmME1 or GsGIS3 genes displayed minimal hematoxylin staining compared to WT following exposure to Al stress, indicating robust resistance to Al [61,62]. In this study, faint staining of the GmMYB183-OE root tips in both Arabidopsis and soybean hairy roots was consistent with the relative root growth of Arabidopsis plants (Figure 4). Furthermore, transgenic GmMYB183 plants exhibited significantly higher levels of citrate secretion compared to the WT under Al stress (Figure 5b). These findings demonstrate that overexpression of GmMYB183 enhances citrate secretion, thereby conferring Al tolerance in Arabidopsis and soybean hairy roots. In addition, citrate secretion could potentially alter the apoplastic pH of the root environment, impacting the availability of nutrients and the activity of enzymes involved in root growth. It is worth mentioning that citrate secretion can improve the utilization of soil phosphorus and reduce the use of phosphorus fertilizer in acidic soil. Therefore, the role of GmMYB183 under other abiotic stresses is also worthy of investigation.

MATE transporters play multiple roles in plants, including detoxification, secondary metabolite transport, Al tolerance, and disease resistance. To investigate the potential molecular regulatory mechanisms of GmMYB183 under Al stress, we profiled several Al-tolerance-related genes in WT and transgenic lines. RT-qPCR analysis revealed that, under Al stress, GmMATE75 was upregulated in GmMYB183-OX root tips, which might be responsible for the improved Al tolerance of transgenic plants (Figure 5c). GmMATE75, a plasma-membrane-localized citrate transporter, mediates Al-induced citrate efflux in soybean root apices [23]. Our previous plasma membrane proteomics analysis showed that, under Al stress, GmMATE75 is upregulated in Al-tolerant TBS [63]. In this study, we found that GmMYB183 binds the promoter of the soybean GmMATE75 gene. Similarly, Li et al. reported that OsWRKY22 activates OsFRDL4 expression and enhances citrate secretion, thereby promoting Al tolerance in rice [36]. These findings suggest that GmMYB183 functions as a regulator of GmMATE75 in Al tolerance. In addition, Arabidopsis MATE transporter 30 (AtDTX30) regulates auxin homeostasis in roots, influencing root development and Al tolerance [64]. The dtx30 mutants show reduced elongation of primary roots, root hairs, and lateral roots, which is similar to GmMYB183-OE roots. The altered root architecture could be an adaptive response to metal stress, including Al stress, as root hairs can secrete citrate and create a rhizosphere environment that reduces Al toxicity. Moreover, citrate has been implicated in hormonal signaling pathways, particularly those related to auxin and ethylene, which play crucial roles in root development [65]. Therefore, we hypothesized that GmMYB183 modulates hormone levels in roots to regulate root development and promote Al tolerance.
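The RT-qPCR comparisons above report relative expression; assuming the standard 2^(−ΔΔCt) method of Livak and Schmittgen (the text does not state which quantification scheme was used), the calculation looks like the sketch below. The Ct values are hypothetical.

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change by the 2^(-ddCt) method: normalize the target gene to a
    reference gene in each sample, then compare treated vs. control."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for GmMATE75 vs. a housekeeping reference gene,
# comparing GmMYB183-OX root tips with EV controls under Al stress.
fold = relative_expression(ct_target_treated=22.1, ct_ref_treated=18.0,
                           ct_target_control=24.6, ct_ref_control=18.2)
print(f"GmMATE75 relative expression (OX vs. EV): {fold:.2f}-fold")
```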
The phosphorylation of transcription factors plays a crucial role in the regulation of gene expression and function [71]. In plants, the ABA-dependent multisite phosphorylation of the transcription activator AREB1 is essential for its own activity and for the expression of ABA-inducible genes [72]. Freezing tolerance in Arabidopsis requires the MAPK6-mediated phosphorylation of the transcriptional repressor MYB15 [73]. Additionally, the phosphorylation of WRKY33 and ERF6 by MPK3/MPK6 activates defense-related genes and enhances resistance against Botrytis cinerea [74,75]. Similarly, rhizobia inoculation in soybean inhibits the phosphorylation of GmMYB183 at Ser61 and enhances tolerance to salinity [76]. This study reveals that GmMYB183 regulates the expression of GmMATE75, potentially in association with the phosphorylation of GmMYB183 at Ser36, leading to the promotion of citrate secretion. Furthermore, the regulation of GmMATE75 expression by GmMYB183 depends on two binding sites in the GmMATE75 promoter (Figure 9). These findings highlight the involvement of GmMYB183 in responses to different abiotic stresses through the phosphorylation of different sites. Therefore, evaluating the potential functions of GmMYB183-S36A under Al stress, identifying the upstream kinase of GmMYB183, and defining its association with Al receptors are crucial tasks that would provide valuable insights for the development of Al-tolerant crop varieties, contributing to sustainable agriculture and food security.

Conclusions

In this study, we identified GmMYB183, an R2R3 MYB transcription factor in TBS, as being involved in Al tolerance. Overexpression of GmMYB183 in Arabidopsis and soybean hairy roots enhanced plant tolerance to Al stress compared to the wild type, with higher citrate secretion and less Al accumulation. Furthermore, using a dual-luciferase reporter system and yeast one-hybrid assays, the GmMYB183 protein was shown to directly activate the transcription of GmMATE75. These results indicate that GmMYB183 contributes to Al detoxification by promoting the secretion of citrate, providing valuable insight into the genetic basis for further elucidating the mechanisms that improve plant tolerance to Al stress in acid soils.

Figure 1. Homologous alignment analysis of GmMYB183. The red arrow represents phosphorylation sites and the underscore indicates the SANT domain. The accession number of each protein is listed in Table S2.
Figure 2. Expression patterns of GmMYB183 in tissues and under Al³⁺ treatment. (a) Temporal expression pattern of GmMYB183 in root tips under Al³⁺ treatment. Seedlings were exposed to 50 µmol/L AlCl₃, and 2 cm long roots were taken from the seedlings at the processing time nodes of 0, 3, 6, 9, 12, 24, 48 and 72 h. (b) Tissue expression patterns of GmMYB183. Samples of roots, stems, and leaves are from seedlings treated with 0 or 50 µmol/L AlCl₃ for 24 h. Column bars represent means ± standard errors (n = 3). Different letters represent significant differences (Duncan, p < 0.05).

Figure 3. Subcellular localization analysis of the GmMYB183 protein in Arabidopsis root cells. (a) Subcellular localization of the GmMYB183 and GmMYB183-S36A proteins. (b) Evaluation of how GmMYB183 or GmMYB183-S36A interacts with GmSGFa using yeast two-hybrid.

Figure 5. Overexpression of the GmMYB183 gene improves Al tolerance in soybean hairy roots. (a) Hematoxylin staining of hairy roots after 50 µmol/L AlCl₃ treatment (scale bar = 1 mm). (b) Citrate secretion of hairy roots treated with various AlCl₃ concentrations for 24 h. (c) Relative expression of GmMATE75 in hairy roots. EV: empty vector; OX: overexpression of GmMYB183 in soybean hairy roots. Bars on columns represent means ± standard errors (n = 3). Different letters represent significant differences (Duncan, p < 0.05).

Figure 7. Dual-luciferase analysis of GmMYB183 and promoters of the GmMATE75 gene, without or with mutation sites. (a) Sequences of GmMATE75 gene promoters without or with mutation sites. (b) Dual-luciferase assays between GmMYB183 and P3 segments of the GmMATE75 promoter. LUC, firefly luciferase activity; REN, Renilla luciferase activity. Columns and bars represent means ± standard errors (n = 3). * above columns represents a significant difference (Duncan, p < 0.05).
Figure 9. Schematic presentation of the transcriptional regulation of MATE by the GmMYB183 transcription factor. In response to Al stress, Al³⁺ interacts with a receptor (ALR1) on the plasma membrane to initiate a signaling pathway. The Al signal phosphorylates GmMYB183 through an unknown pathway. Furthermore, the ALR1-BAK1 complex inhibits F-box protein (RAE1) activity, thereby preventing the degradation of STOP1. Both STOP1 and GmMYB183 activate the expression of the MATE gene, promoting the secretion of citrate and the chelation of Al ions.

Supplementary Materials: Figure S5: Relative expression level of GmMYB183 in Arabidopsis. Figure S6: Relative expression level of At-MATE in transgenic Arabidopsis. Figure S7: Different stages of the soybean hairy root transformation. Figure S8: Relative expression level of GmMYB183 in hairy roots. Table S1: The primers used in this study. Table S2: The accession number of each protein in the homologous alignment analysis.

Author Contributions: Conceptualization, Y.W. and R.H.; writing-original draft preparation, Y.W. and R.H.; writing-review and editing, Y.W. and R.H.; formal analysis, Y.W. and R.H.; funding acquisition, Y.Y. and R.H. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the National 973 Project of China (2014CB138701) and the Peacock Program of Shenzhen (KQTD2017-032715165926).

Institutional Review Board Statement: Ethical review and approval were waived for this study because it did not involve humans or animals.

Informed Consent Statement: Not applicable.

Data Availability Statement: Data are contained within the article or Supplementary Material.
2024-06-22T15:13:04.329Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "f1616eab0517441c0d2c1d06375ebee5a46ffaf1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-273X/14/6/724/pdf?version=1718788084", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9a423272b5a88353fe92ffa7ff202afcba32e385", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
218617292
pes2o/s2orc
v3-fos-license
Role of Extrinsic Apoptotic Signaling Pathway during Definitive Erythropoiesis in Normal Patients and in Patients with β-Thalassemia.

Apoptosis is a process of programmed cell death which has an important role in tissue homeostasis and in the control of organism development. Here, we focus on information concerning the role of the extrinsic apoptotic pathway in the control of human erythropoiesis. We discuss the role of tumor necrosis factor α (TNFα), tumor necrosis factor ligand superfamily member 6 (FasL), tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) and caspases in normal erythroid maturation. We also attempt to initiate a discussion on the observations that mature erythrocytes contain most components of the receptor-dependent apoptotic pathway. Finally, we point to the role of the extrinsic apoptotic pathway in the ineffective erythropoiesis of different types of β-thalassemia.

Introduction

Erythrocytes have a life span of approximately 120 days, at the end of which they become senescent and are removed from the circulation. Premature removal of red blood cells occurs by eryptosis, a form of stress-induced, programmed cell death which leads to the removal of defective cells without the release of the cell content. During this process, changes similar to apoptosis, such as cell shrinkage, membrane blebbing (release of small membrane vesicles) and exposure of phosphatidylserine on the cell membrane surface, can be observed [1,2]. Eryptosis is a strictly controlled process involving receptors, ion channels and a wide range of signaling molecules (mostly kinases) and mediators leading to the above-mentioned morphological changes of the erythrocyte, followed by its phagocytosis by macrophages. One of the main events of eryptosis is an increase in cytosolic Ca²⁺ concentration, which is mainly a consequence of Ca²⁺ entry through prostaglandin E2 (PGE₂)-activated and erythropoietin-inhibited non-selective cation channels [3,4]. An increased Ca²⁺ concentration activates the Gardos channel, which facilitates K⁺ efflux, resulting in cell dehydration and shrinkage [5]. Further consequences of elevated Ca²⁺ content are the loss of membrane phospholipid asymmetry ("scrambling") and calpain activation, which is responsible for proteolysis of cell membrane skeleton proteins and membrane loss via blebbing [6]. The scrambling of the asymmetric membrane phospholipid distribution and the cell shrinkage caused by Ca²⁺ are enhanced by ceramide arising from sphingomyelin hydrolysis, which is another stimulus of eryptosis. Another important mechanism involves processing at the D38 residue, which generates a truncated form, P18. Phosphorylation of this form by casein kinase 1 (CK1) leads to erythroblast differentiation. In turn, inhibition of CK1 may induce apoptosis [25].

This review focuses on the impact of the extrinsic apoptotic pathway on normal human erythropoiesis. Similarities between eryptosis and apoptosis prompted studies aiming at the evaluation of whether erythrocytes possess apoptotic machinery similar to nucleated cells. Additionally, we discuss the influence of the extrinsic, receptor-dependent apoptotic pathway on ineffective erythropoiesis in patients with β-thalassemia.

Receptor-Dependent Apoptotic Pathway

The TNFRSF consists of TNF receptor-associated factor (TRAF)-interacting receptors, decoy receptors (DcRs) and death receptors (DRs).
The DRs can be further classified according to the molecular mechanisms of apoptosis and necroptosis activation: TNF receptor superfamily member 6 (Fas), TNF-related apoptosis-inducing ligand receptor 1 (TRAILR1) and TNF-related apoptosis-inducing ligand receptor 2 (TRAILR2) belong to the first subgroup of DRs, which, on the cytosolic plasma membrane surface, directly interact with an adapter protein, Fas-associated protein with death domain (FADD), while tumor necrosis factor α (TNF-α) receptor 1 (TNFR1/p55) belongs to the second subgroup of DRs, which interact directly with TNFR1-associated protein with death domain (TRADD) [26]. Schemes of the pathways mentioned above are shown in Figure 1.

Figure 1. (a,b) Tumor necrosis factor (TNF) receptor superfamily member 6/tumor necrosis factor-related apoptosis-inducing ligand (Fas/TRAIL)-mediated apoptotic pathways lead to the formation of Complex I/the death-inducing signaling complex (DISC), which consists of Fas-associated protein with death domain (FADD), procaspase-8/10 and cellular FLICE (FADD-like IL-1β-converting enzyme) inhibitory proteins (c-FLIP). (c) Binding of tumor necrosis factor α (TNFα) to the TNF-α receptor 1 (TNFR1) causes the formation of Complex I (TNFR1-associated protein with death domain (TRADD), TNFR1, TNF receptor-associated factor 2 (TRAF2), TNFα-related receptor interacting protein (RIP)), which in the next stage either activates the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) pathway or transforms into Complex II (TRADD, TNFR1, TRAF2, RIP, FADD, procaspase-8, procaspase-10).

The DRs are characterized by an extracellular N-terminal domain with cysteine-rich domains (CRDs), a membrane-spanning region and a C-terminal intracellular domain which harbors a death domain (DD). While DRs share the CRDs with all members of the TNFRSF, the DD is found only in DRs. The CRDs contain the N-terminal pre-ligand assembly domain (PLAD), which mediates receptor-receptor interactions and is involved in binding ligands of the TNF superfamily (TNFSF) [27,28].
The PLAD domains' interactions cause spontaneous receptor dimerization and/or trimerization. In the case of the CD95/Fas and TRAILR death receptors, the PLAD-PLAD interaction has a high affinity, in contrast to the TNFR1 receptor [29]. Death ligands exist in the cell membrane as trimeric type II transmembrane proteins belonging to the TNFSF. They contain a conserved C-terminal TNF homology domain (THD), an N-terminal intracellular proline-rich domain (PRD) and a stalk region. The THD binds to the CRDs of the TNFRSF. Membrane-bound death ligands can be processed in the stalk region by metalloproteases, which leads to the release of soluble molecules. The soluble death ligands also possess a THD and, in consequence, retain the capacity to react with TNFRSF receptors [27].

According to the sequential model of TNFRSF receptor activation, a single TNFRSF receptor interacts with a TNFSF ligand trimer and forms a cell surface-bound TNFRSF receptor-TNFSF ligand₃ complex. Next, in two subsequent stages, two additional monomeric TNFRSF receptors are added to create an active TNFSF ligand₃-TNFRSF receptor₃ cluster, which is necessary for the induction of death-inducing signaling complexes [29].

The binding of Fas with tumor necrosis factor ligand superfamily member 6 (FasL) leads to the formation of the DISC, also termed complex I, consisting of FADD, procaspase-8, procaspase-10 and cellular FLICE (FADD-like IL-1β-converting enzyme) inhibitory proteins (c-FLIPs). The formation of the DISC leads to the activation of caspase-8 and caspase-10 and, in consequence, to the cleavage and activation of the effector caspase-3 and caspase-7. Additionally, in some cell lines, a second cytosolic complex is formed upon ligand stimulation. This complex, called complex II, composed of FADD, procaspase-8 and c-FLIPs, might amplify caspase activation by processing caspase-3 (Figure 1a) [30]. Upon tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) stimulation, the TRAIL receptor forms a membrane-associated DISC/complex I in a similar way to the Fas receptor, as well as a secondary complex, similar to the above-mentioned TNFR1 complex II (Figure 1b) [30].
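The sequential activation model above is essentially a chain of three binding steps. As a rough illustration, the sketch below treats those steps as irreversible mass-action reactions and integrates them with SciPy; the rate constants and initial concentrations are arbitrary illustrative values, not measured parameters.

```python
from scipy.integrate import solve_ivp

def sequential_activation(t, y, k1, k2, k3):
    """Irreversible mass-action steps:
    L3 + R -> L3R1;  L3R1 + R -> L3R2;  L3R2 + R -> L3R3 (signaling-competent)."""
    L3, R, C1, C2, C3 = y
    v1, v2, v3 = k1 * L3 * R, k2 * C1 * R, k3 * C2 * R
    return [-v1, -(v1 + v2 + v3), v1 - v2, v2 - v3, v3]

# Arbitrary illustrative starting pools (a.u.): ligand trimers and free receptors.
y0 = [1.0, 5.0, 0.0, 0.0, 0.0]
sol = solve_ivp(sequential_activation, (0, 50), y0, args=(0.5, 0.5, 0.5))

L3, R, C1, C2, C3 = sol.y[:, -1]
print(f"final signaling-competent ligand3-receptor3 clusters: {C3:.2f}")
print(f"remaining free receptors: {R:.2f}, free ligand trimers: {L3:.2f}")
```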
Signaling through TNFR1 leads to the formation of complex I and then complex II. Complex I consists of TRADD, TNFR1, TNF receptor-associated factor 2 (TRAF2) and the tumor necrosis factor α (TNFα)-related receptor interacting protein (RIP). Complex I assembles rapidly following TNF-α stimulation and activates the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) signal transduction pathway, which triggers survival signals. In the next step, complex I dissociates from the death receptor and binds FADD and procaspase-8 (complex II), the activation of which leads to the induction of apoptosis (Figure 1c) (reviewed in [30]).

It is worth noting that the activation of procaspase-8 at the DISC is driven by death effector domain (DED) "chains", which include procaspase-8, procaspase-10 and c-FLIP. The number of procaspase-8 molecules in DED chains is 10 times higher than the number of procaspase-10 and c-FLIP molecules. The local increase of procaspase-8 concentration in the DISC causes dimerization of procaspase-8. Then, procaspase-8 dimers (also called p55/p53) are converted by autocatalytic two-step transprocessing into mature heterotetrameric caspase-8 (p18₂/p10₂), while the DISC-bound caspase-8 prodomain can be replaced by a new procaspase-8 molecule. The termination of the DED chain's elongation depends on its stability and on the association/dissociation rates of procaspase-8 to the chain (Figure 2) [26,31,32].

Among the most important regulators of the receptor-dependent apoptosis pathway are the c-FLIP isoforms, which are major antiapoptotic proteins. Three isoforms can be distinguished: short c-FLIPS, Raji c-FLIPR and long c-FLIPL. All c-FLIP isoforms possess two N-terminal DED domains. c-FLIPS and c-FLIPR are truncated versions of procaspase-8, whereas c-FLIPL retains the full-length procaspase-8 chain but lacks the catalytic cysteine residue within the active site and thus has no proteolytic activity [32]. The isoforms of c-FLIP can heterodimerize with procaspase-8 via a co-operative and hierarchical binding mechanism. The result of c-FLIPS or c-FLIPR homo- or heterodimerization with procaspase-8 is the prevention of DED-mediated caspase-8 oligomerization and the inhibition of its activation. The ratio of c-FLIPL to procaspase-8 in c-FLIPL-procaspase-8 heterodimers determines whether procaspase-8 will be activated or inhibited. At physiologically low levels, the c-FLIPL-procaspase-8 heterodimer activates procaspase-8. In contrast, high levels of c-FLIPL block procaspase-8 oligomerization, which results in the inhibition of caspase cascade activation (Figure 3) [32,33].
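The ratio-dependent behavior of c-FLIPL described above can be summarized as a simple threshold rule. The sketch below is a deliberately minimal toy model; the threshold values are arbitrary placeholders, since the text only states that low c-FLIPL levels activate and high levels inhibit procaspase-8.

```python
def heterodimer_outcome(flip_l, procaspase8, low=0.1, high=1.0):
    """Toy rule for c-FLIP_L:procaspase-8 heterodimers.
    Thresholds are arbitrary placeholders, not measured values."""
    ratio = flip_l / procaspase8
    if ratio <= low:
        return "procaspase-8 activation (physiologically low c-FLIP_L)"
    if ratio >= high:
        return "oligomerization blocked -> caspase cascade inhibited"
    return "intermediate regime (outcome depends on context)"

for flip_l in (0.05, 0.5, 2.0):
    print(f"c-FLIP_L/procaspase-8 = {flip_l:4.2f}: {heterodimer_outcome(flip_l, 1.0)}")
```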
Other regulators of apoptosis are the inhibitors of apoptosis (IAPs). The mammalian IAPs, X-linked IAP (XIAP), cellular inhibitor of apoptosis protein 1 (cIAP1) and cellular inhibitor of apoptosis protein 2 (cIAP2), contain three baculovirus IAP repeat (BIR) domains, which mediate protein-protein interactions, a UB-associated domain (UBA), which is responsible for the interaction with ubiquitylated proteins, and the "really interesting new gene" (RING) domain, conferring E3 ubiquitin ligase activity. Additionally, cIAP1 and cIAP2 have the caspase recruitment domain (CARD), which has the ability to inhibit their E3 ligase activity [34]. The BIR1 domain of XIAP binds the TGFβ-activated kinase 1 (TAK1)-binding protein 1 (TAB1) [35], whereas the BIR1 domain of the cIAPs binds TRAF1 and TRAF2 [36]. In turn, the XIAP BIR2 domain is involved in the inhibition of the effector caspase-3 and caspase-7, while BIR3 binds to caspase-9 and prevents the dimerization of this enzyme [37]. cIAP1 and cIAP2 can bind caspases through their BIR2 and BIR3 domains, but do not inhibit them [38]. XIAP is able to induce the ubiquitination of active caspases at the K48 residue, causing their proteasomal degradation. Additionally, XIAP is involved in the covalent tagging of caspase-7 with the ubiquitin-like protein NEDD8, leading to the inactivation of this caspase [39].
cIAP1 and cIAP2 are able to ubiquitinate caspase-3 and caspase-7 [40,41], but only cIAP1 targets them for proteasomal degradation [40]. The linear ubiquitin chain assembly complex (LUBAC) is another regulator of cell death. LUBAC is the only known E3 ligase complex for linear ubiquitination and consists of ring finger protein 31 (RNF31, also known as HOIP), HOIL-1 and the Shank-associated RH domain-interacting protein (sharpin) [42]. Recent studies showed that sharpin has a regulatory function in the NF-κB and apoptotic signaling pathways. The absence of sharpin attenuates TNFα-mediated NF-κB activation [43-45]. Cleavage of RNF31 by caspase-3 and caspase-6 suppresses the activity of LUBAC in the NF-κB signaling pathway. This process weakens the inhibitory role of NF-κB in death signaling, leading to the sensitization of cells to cell death signals [46]. On the other hand, another study showed that RNF31 limits caspase-8 activity in complex I and complex II in TRAIL signaling, which causes the inhibition of apoptosis [47,48].

Erythropoiesis

In humans, two distinct types of erythroid cell production take place. The first corresponds to primitive erythropoiesis, which occurs in "blood islands" within the yolk sac. The newly formed cells are derived from mesodermal cells. Primitive erythroid cells (EryP) circulating in the early embryo are short-lived (~two days), nucleated, large and express the ε-, γ-, α- and ζ-globin genes [10,49]. This process can be regulated by the renal cytokine erythropoietin (Epo). Epo has an influence on the survival, rate of terminal maturation and proliferation of primitive human erythroid precursors [50].

Definitive erythropoiesis occurs first in the fetal liver and then in postnatal bone marrow. The adult erythroid lineage starts with multipotent hematopoietic stem cells (HSCs), also called long-term repopulating HSCs (LT-HSCs). The asymmetrical division of LT-HSCs gives rise to short-term repopulating HSCs (ST-HSCs), whose asymmetrical division is the source of the multipotent progenitor cells (MPP-HSCs). These cell types differ in a gradual loss of self-renewal potential, but they retain the same ability to give rise to all blood cell types [51]. Further asymmetrical divisions of MPP-HSCs give slowly proliferating BFU-E and then rapidly dividing CFU-E progenitor cells, which divide three to five times within two to three days. Both BFU-E and CFU-E cells are sensitive to Epo due to the presence of the erythropoietin receptor (EpoR) on these cells [52,53]. Continued stimulation with Epo triggers differentiation into erythroblast precursors: ProE, each of which undergoes three to four rounds of mitosis, giving sequentially two basophilic erythroblasts (BasoE), four PolyE, and eight orthochromatic erythroblasts (OrthoE). The maturation of these cells is characterized by progressive expansion, accumulation of hemoglobin, a decrease in RNA content, an increase in chromatin density and a decrease in cell size [50]. Each OrthoE gives rise to two cell forms: a pyrenocyte (the plasma membrane-coated extruded nucleus, which undergoes phagocytosis by macrophages) and a reticulocyte, the stage resulting from enucleation [54], which changes into a mature erythrocyte [8]. Reticulocytes and erythrocytes no longer express EpoR [55].
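The division counts quoted above imply a simple geometric amplification from each progenitor. The sketch below just works through that arithmetic; it assumes symmetric divisions and no cell loss, which is an idealization.

```python
# Theoretical amplification per progenitor, assuming every division is
# symmetric and no cells are lost (an idealization of the counts above).

# CFU-E: three to five divisions within two to three days.
cfu_e_output = [2 ** n for n in (3, 4, 5)]
print(f"cells per CFU-E after 3-5 divisions: {cfu_e_output}")  # [8, 16, 32]

# ProE: three to four rounds of mitosis -> 2 BasoE -> 4 PolyE -> 8 OrthoE (-> 16).
for divisions in (3, 4):
    print(f"OrthoE-stage cells per ProE after {divisions} divisions: {2 ** divisions}")

# Combined lower and upper bounds for one CFU-E-derived ProE cohort.
low, high = min(cfu_e_output) * 2 ** 3, max(cfu_e_output) * 2 ** 4
print(f"reticulocytes per CFU-E (idealized range): {low}-{high}")
```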
Role of TNF-α in Normal Erythropoiesis

Extensive studies have been devoted to the effects of TNF-α on erythropoiesis. A common feature of these works is that they all show that TNF-α exerts a direct or indirect inhibitory effect on erythroid maturation. One of the first studies showed that the indirect inhibition of human CFU-E colony formation by TNF-α was an effect of interferon β (IFNβ) released by bone marrow accessory cells [58,59]. On the other hand, a direct effect of TNF-α on erythroid progenitors has also been described. Namely, TNF-α inhibited erythropoiesis stimulated by Epo alone or in combination with interleukin 3 (IL3) [60,61]. A further study demonstrated that TNF-α can downregulate BFU-E colony formation stimulated by stem cell factor (SCF) with Epo or IL9 with Epo [62]. The same authors showed that the TNF-α-induced inhibition of BFU-E colony formation is mediated by TNFR1/p55. On the other hand, TNF-α receptor 2 (TNFR2/p75) is involved in the inhibition of progenitors' responses to Epo alone [62,63], which somewhat contradicts the fact that TNFR2 does not have a death domain in its cytoplasmic tail [64]. While TNFR1 occurrence was observed only during the initial stages of erythropoiesis, expression of the TNFR2-encoding gene (TNFR2) could be detected during all stages of erythroid differentiation. The variable occurrence of TNFRs on erythroid progenitors may suggest a different sensitivity of these cells to TNF-α [9,65].

Other studies suggested the existence of negative regulatory feedback of TNF-α within specialized niches, the erythroblastic islands. The accumulation of mature erythroblasts that express death ligands may temporarily stop the expansion and differentiation of immature erythroblasts sensitive to death ligands. The interaction between death ligands and death receptors results in the caspase-mediated degradation of GATA-1 [66,67]. A later study suggested the possible occurrence of negative feedback in which TNF-α secreted by CFU-Es or ProEs suppresses the maturation of erythroid progenitors [68]. Another study, carried out on a heterogeneous cell population constituting a hematopoietic stem and progenitor cell (CD34+ HSPC) culture, demonstrated that TNF-α plays an important role in perturbing the balance between GATA-1 and GATA-2, which is important in erythroid lineage development. TNF-α promoted the formation of the GATA-1/PU.1 complex, which blocks the transcriptional activity of GATA-1, consequently inhibiting erythropoiesis in Epo-induced HSPCs [69].

In a dose-dependent manner, TNF-α in combination with interferon γ (INFγ) downregulated the growth of purified hematopoietic cells (CD34+/CD38− and CD34+/CD38+) in vitro and was also found to induce apoptosis of total bone marrow and CD34+ cells [70]. This effect did not require the presence of accessory cells. It seems that TNF-α and INFγ exert a suppressive effect on erythropoiesis in two ways: (1) inhibition of progenitor CD34+ cell proliferation through receptor-activation-triggered phosphorylation of protein kinase R (PKR) [71] and/or (2) induction of apoptosis, which may reduce the number of stem and progenitor cells [70]. It should be mentioned that the former mechanism has been implicated in the pathophysiology of bone marrow failure and myelodysplastic syndromes (MDS) [71].

To sum up, it seems that TNF-α mainly exerts an inhibitory effect on erythroid maturation. TNF-α can inhibit erythropoiesis induced by Epo alone, Epo with IL3, Epo with SCF or Epo with IL9.
The above-mentioned data indicate that TNF-α acts as a negative regulator, which can inhibit the maturation of erythroid progenitors and immature erythroblasts or regulate their number via the induction of apoptosis.

Fas/FasL in Normal Erythropoiesis

There is no detectable level of Fas in CD34+ cells freshly isolated from human bone marrow. Treatment with TNF-α and/or INFγ increases the Fas level on CD34+ cells. The inhibition of erythropoiesis by TNF-α and INFγ might therefore be mediated by the Fas/FasL system [72]. Expression of the FasL-encoding gene (FASLG) has not been identified in purified CD34+ cells, but experimental results showed that stroma-derived factor 1α (SDF-1α) induced an increase in the FasL level in these cells, with no effect on Fas production. It seems that SDF-1α inhibits the development of erythroid cells by the functional upregulation of FasL [73].

The expression of the Fas-encoding gene (FAS) increases from the BFU-E to the CFU-E stage of erythroid development, to reach a maximal level in ProE and BasoE. In turn, the FASLG gene is expressed in BFU-E, CFU-E and mature OrthoE. The interaction of FasL localized on mature erythroblasts with Fas present on immature erythroblasts induces apoptosis in immature erythroblasts, while mature erythroblasts bearing Fas are insensitive to FasL [74]. The Fas-FasL interaction results in the apoptotic degradation of the transcription factors Tal-1 and GATA-1. The degradation of both transcription factors is therefore responsible for the Fas-mediated downregulation of the maturation of immature erythroblasts [66,75]. This process strongly depends on the concentration of Epo, since Fas-based cytotoxicity against immature erythroblasts can be abrogated by high doses of Epo [74], which upregulate GATA-1 expression, which in turn triggers the expression of the BCL2 gene [76]. At intermediate levels of Epo, cell fate depends on the number of mature erythroblasts in the bone marrow, meaning that immature erythroblasts can be arrested in their maturation or enter the apoptosis pathway [77]. On the other hand, the suppression of Fas or caspases by siRNA treatment blocked erythroid differentiation at the ProE stage, i.e., it blocked the ProE to BasoE transition. This effect was reversed by FasL but not TRAIL treatment, suggesting that caspase activation stimulates the erythroid maturation process [78].

SCF inhibits the activation of caspase-8 and caspase-3 without decreasing the level of Fas. SCF prevents the Fas-mediated apoptosis of human erythroid colony-forming cells, mainly CFU-E. This signal was found to depend on Src-family kinases [79,80]. Further studies showed that the mechanism of SCF action is based on an increase in FLIP expression [81]. Experimental results indicate that a high cellular level of FLIP protects human HSPCs from Fas-mediated apoptosis [82].

During erythroid development, various long non-coding RNAs (lncRNAs) were found to regulate erythroid gene expression. Fas-AS1, also known as lncRNA Fas-antisense 1 (Saf), is encoded on the antisense strand of the first intron of the human FAS gene on chromosome 10 [83,84]. During erythropoiesis, Fas-AS1 is upregulated by the erythroid transcription factors GATA-1 and Kruppel-like factor 1 (KLF1) and is negatively regulated by NF-κB. Since the level of Fas-AS1 expression increases during erythroid maturation, it has been suggested that the role of this lncRNA is to protect developing erythroblasts from Fas-mediated apoptosis by reducing the level of Fas [85,86].
The studies cited above on the role of the Fas/FasL pathway in erythropoiesis showed that the death ligands and receptors are present up to the stage of basophilic erythroblasts. Both are involved in the inhibition of erythropoiesis in CD34+ cells and immature erythroblasts. The Fas/FasL system plays a significant role in controlling the level and rate of immature erythroblast maturation in an Epo-dependent manner.

Role of TRAIL in Normal Erythropoiesis

Other death receptors expressed by erythroid cells are TRAILR1 and TRAILR2. In immature cells, the expression level of both receptors is higher than in mature erythroblasts [66]. In turn, TRAILR3 and TRAILR4 are not present on the cell surface during erythroid maturation [87]. TRAIL, which binds TRAILR1 and TRAILR2, is produced only by mature erythroblasts [66]. TRAIL, similarly to TNF-α and FasL, negatively regulates adult erythropoiesis [88]. Moreover, TRAIL was found to be involved in the INFγ-mediated inhibition of erythropoiesis [89]. There is evidence that TRAIL negatively affects the generation of mature erythroblasts by activation of the ERK1/2 pathway [87]. Another study showed that protein kinase Cε (PKCε) protects Epo-responsive mature erythroblasts against TRAIL-mediated apoptosis by the upregulation of BCL-2 [90,91]. In addition, recent research showed that, in the absence of Epo, TRAIL behaves like a pro-survival factor by activating the NF-κB/IκBα pathway [92]. In summary, the TRAIL/TRAILR1 and TRAILR2 pathways take part mainly in the negative regulation of erythropoiesis, in a similar way to the Fas/FasL pathway or TNF-α.

Effect of Caspases on Normal Erythroid Maturation

The activation of several caspases seems to be essential in the earlier stages of erythroblast differentiation [93]. The proenzymes of caspases 1-3 and 5-9 are present in erythroid cells. The levels of procaspase-2, -3, -7 and -8 are highest in immature erythroblasts [66,94]. Caspase-3 is the best-known caspase involved in erythroid maturation. The occurrence of activated caspase-3 was observed in cells from late BFU-E [95] to BasoE [93,96]. Caspase-3 inhibition resulted in a reduction in erythroid expansion and differentiation [95]. On the other hand, caspase-3 downregulation had no effect on the terminal maturation of erythrocytes, such as nuclear condensation and extrusion [97]. As was mentioned above, caspase activation via the Fas/FasL pathway has a positive effect on erythropoiesis; in particular, it is necessary for the transition of ProE to BasoE [78]. The fate of erythroid precursors was found to depend on the nuclear localization of the chaperone HSP70. During erythropoiesis, in the presence of Epo, HSP70 enters the nucleus and protects GATA-1 from caspase cleavage. In the absence of Epo, HSP70 leaves the nucleus and allows the caspase-3-mediated degradation of GATA-1 [98,99]. The above-mentioned data point to the crucial role of caspase-3 in the early stages of erythroid maturation and indicate that active caspase-3 is necessary for the transition of ProE to BasoE. Moreover, caspase-3 activation plays a major role in the apoptosis of erythroid precursors upon Epo starvation.

Mature Erythrocytes and the Extrinsic Apoptotic Pathway

The absence of an intrinsic apoptotic pathway in mature erythrocytes has an obvious evolutionary connection with the lack of mitochondria.
Studies on the apoptotic machinery in erythrocytes have clearly indicated that caspase-9, apoptotic protease activating factor 1 (Apaf-1) and CytC are missing, but they also showed that red blood cells contain considerable amounts of caspase-3 and caspase-8 [100]. Erythrocytes, apart from the above-mentioned caspases, also possess Fas, FasL and FADD, which are vital for the extrinsic apoptotic pathway. All of these components are localized to the detergent-resistant membrane (DRM) fraction of aged erythrocytes and also of those which have undergone oxidative stress [101]. Early attempts at apoptosis activation showed that erythrocytes do not enter the path of apoptosis after treatment with staurosporine or when cells are incubated in the absence of serum; normally, these conditions trigger apoptosis in nucleated cells [102]. Subsequent studies confirmed that caspase-8 and caspase-3 cannot be activated by a CD95-stimulating antibody [103], etoposide or mitomycin C [100], well-known inducers of apoptosis. More recent studies revealed that the induction of the Fas-mediated cell death cascade may be a result of reactive oxygen species (ROS) and cholesterol accumulation in the erythrocytes of animals exposed to arsenic derivatives (e.g., sodium arsenite) [104,105]. Another study suggested that chronic lead exposure, which increases the generation of •OH and the loss of K⁺ ions, may induce Fas translocation into the membrane DRM fraction and, in consequence, the induction of the extrinsic apoptotic pathway in erythrocytes [106].

It is worth noting that activated caspases can take part in the degradation of erythrocyte membrane proteins which are involved in the maintenance of cell shape and the mechanical properties of the erythrocyte membrane. Caspase-8 is involved in β-spectrin degradation. Experiments performed on erythrocyte ghosts showed that spectrin undergoes proteolytic cleavage, among other sites, at residue 470, which results in the generation of an N-terminal fragment containing the actin-binding domain (ABD). This process was shown to depend on protein 4.1 [107]. Activated caspase-3 is able to cleave the N-terminal cytoplasmic domain of the human erythroid anion exchanger 1 (AE1/band 3) in aged erythrocytes as a consequence of oxidative stress. This cleavage weakens the interaction between AE1 and protein 4.2, which results in weakened interactions of AE1-based complexes with spectrin [108-111]. Other studies provided evidence that, upon storage of erythrocytes in saline-adenine-glucose-mannitol (SAGM) preservation medium, caspase-3 mediates the formation of a 24 kDa fragment resulting from the degradation of the AE1 protein [112]. Data from other authors showed that caspase-3 is involved in the clustering of the AE1 protein and the recognition of erythrocytes by macrophages [113,114]. Additionally, the externalization of phosphatidylserine (PS) in erythrocytes under oxidative stress is dependent on caspase-3, which seems to be connected to the impairment of flippase activity [115].

The observation that mature erythrocytes contain all components of the extrinsic apoptotic pathway raises many questions, especially since common inducers of apoptosis do not activate this pathway. Studies based on animal models suggested that other stimuli, such as ROS or cholesterol accumulation, could activate the extrinsic apoptotic pathway.
The above data also demonstrated important effects of caspase-8 and caspase-3 on red cell membrane and membrane skeleton proteins, which are crucial in determining cell shape and erythrocyte deformability, and therefore erythrocyte survival in the circulation.

β-Thalassemia

β-thalassemia is an autosomal recessive, inherited blood disorder characterized by a reduced (β⁺) or absent (β⁰) β-globin chain, which is the main component of adult hemoglobin. The β-globin chain is encoded by the HBB gene located on chromosome 11. β-thalassemia is characterized by a decrease in hemoglobin synthesis and a reduction in erythrocyte production, leading to anemia [116]. β-thalassemia is widespread in the Mediterranean region, the Middle East, southeast Asia, and North and Central Africa [117]. Every year, about 68,000 children are born with various β-thalassemia syndromes [118]. The heterogeneity of β-thalassemia is a result of different mutations in the HBB gene [119]. β-thalassemia minor is caused by a heterozygous mutation that affects only one allele of the β-globin gene; the affected allele can be either β⁺ or β⁰. Carriers of β-thalassemia minor are usually asymptomatic, but sometimes suffer from mild anemia [120]. In turn, β-thalassemia intermedia results from homozygous or compound heterozygous mutations in the HBB gene, i.e., both alleles are affected. Patients with β-thalassemia intermedia show mild to moderate anemia. The most severe type is β-thalassemia major, which occurs when both alleles of the β-globin gene carry mutations. Among the above types of β-thalassemia, only patients with β-thalassemia major need regular blood transfusions [120,121]. Another type of β-thalassemia is β-thalassemia/hemoglobin E, which globally makes up about 50% of all cases of severe β-thalassemia [122]. Hemoglobin E (HbE) is an abnormal hemoglobin which results from the substitution of a glutamic acid residue with lysine at codon 26 of the HBB gene. This mutation contributes to the activation of a splice site, which leads to abnormal messenger RNA processing. HbE is produced at a reduced rate, and the patient's symptoms resemble mild β-thalassemia [123]. Patients with β-thalassemia/HbE inherit the β-thalassemia allele from one parent and the structural variant HbE from the other parent [122].

Role of the Extrinsic Apoptotic Pathway in Different Types of β-Thalassemia

In 1970, ferrokinetic studies showed that 60-80% of thalassemic erythroid precursors die in the bone marrow. For comparison, in normal marrow, this value reaches 10-20% [125]. The β-thalassemic bone marrow contains about six times more erythroid precursors than normal [126]. The cause of ineffective erythropoiesis in β-thalassemia major was shown in 1993: the authors indicated that α-globin chain accumulation may result in the accelerated apoptosis of thalassemic erythroid precursors in the bone marrow [127]. Quantitative analysis showed that apoptotic cell death in β-thalassemia major patients was four times higher than in healthy subjects [126]. Analyses carried out on patients with moderate to severe forms of β-thalassemia revealed a correlation between ineffective erythropoiesis and apoptosis. Patients with β-thalassemia/HbE had the most ineffective erythroid maturation and the most accelerated programmed cell death, for which α-globin accumulation was responsible [128]. β-thalassemia major bone marrow contains two times more PolyE and one third fewer OrthoE than normal.
In vitro studies identified PolyE as the stage at which apoptosis occurs [129]. β-thalassemic erythroid precursors are phagocytosed about twice as intensively as normal precursors. The enhanced phagocytosis is the result of: (1) an increased level of apoptosis-associated externalization of PS, and (2) an increased number of macrophages and their activation [130]. Flow cytometry measurements showed that thalassemic erythroid precursors expose significantly higher levels of Fas or FasL on their surface than normal erythroid cells [131]. As in typical cells, in β-thalassemia the binding of FasL to Fas leads to the activation of procaspase-8 [132]. Phosphoproteome analysis of β-thalassemic HSCs pointed to a very high abundance of FasL, tumor necrosis factor receptor superfamily member 12A (TNFRSF12A) [133] and TRAF2 [123]. The presence and levels of the mentioned proteins may explain why freshly isolated β-thalassemic HSCs were characterized by shorter survival periods than those from normal donors [123,133].

In contrast to normal erythrocytes, β-thalassemic red blood cells contain a higher level of activated caspase-3, which, similarly to normal cells under oxidative stress, cleaves AE1 proteins [108,140]. Another example of the results of caspase activation is the cleavage of GATA-1, which becomes unprotected by HSP70 due to the sequestration of HSP70 in the cytoplasm of β-thalassemic erythroblasts by an excess of free α-globin chains [141]. Recent data from research on exportin-1 (XPO1), a regulator of HSP70 localization in normal erythroid progenitors, showed the impact of XPO1 activity on β-thalassemic erythroblasts: the XPO1 inhibitor KPT-251 recovered GATA-1 expression and improved the terminal differentiation of β-thalassemic erythroblasts through an increase in the amount of nuclear HSP70, indicating a possible therapeutic treatment of β-thalassemia [142].

It has been hypothesized that an excess of ROS could be a reason for the increased apoptosis of β-thalassemic erythroid cells. ROS generation in β-thalassemic precursor cells is probably a result of the accumulation of unmatched α-globin, hemichromes, non-heme iron and free iron, the latter being able to generate ROS in a similar way to a Fenton reagent. However, there is no evidence indicating a direct connection between ROS and apoptosis in β-thalassemic erythroid cells [131,143].

The above-mentioned data strongly suggest that ineffective erythropoiesis in β-thalassemia is caused by an increase in the level of apoptosis. In comparison to normal cells, thalassemic cells are characterized by: (1) very high expression of FasL, TNFRSF12A and TRAF2, (2) PolyE-OrthoE arrest, (3) a higher serum level of TNF-α, IL-1β and IFNγ, (4) a higher level of activated caspase-3, and (5) enhanced phagocytosis.
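The ferrokinetic figures quoted above (60-80% precursor death in thalassemic marrow versus 10-20% normally, with roughly six-fold precursor expansion) can be combined into a rough back-of-the-envelope comparison of marrow output. The sketch below does only that arithmetic; it ignores peripheral red cell destruction and every other factor, so it is illustrative, not a model of the disease.

```python
# Back-of-the-envelope marrow output: precursors x surviving fraction.
# Figures from the ferrokinetic data cited above; everything else ignored.

normal_survival = (0.80, 0.90)   # 10-20% precursor death
thal_survival   = (0.20, 0.40)   # 60-80% precursor death
expansion       = 6.0            # ~6x more erythroid precursors

per_precursor_drop = (normal_survival[0] / thal_survival[1],
                      normal_survival[1] / thal_survival[0])
print(f"per-precursor yield drops roughly {per_precursor_drop[0]:.1f}-"
      f"{per_precursor_drop[1]:.1f} fold")

rel_output = (expansion * thal_survival[0] / normal_survival[1],
              expansion * thal_survival[1] / normal_survival[0])
print(f"relative marrow output after ~6x expansion: "
      f"{rel_output[0]:.1f}-{rel_output[1]:.1f} x normal "
      f"(peripheral hemolysis and other losses not included)")
```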
Despite extensive research, it is still unclear what the direct mechanism of apoptosis of erythroid cells in β-thalassemic patients is. It seems necessary to specify the mechanism of accelerated apoptosis induced by α-globin chain accumulation. Additionally, it would be worthwhile to focus on the role of ROS overabundance in the increased apoptosis of thalassemic cells, probably existing as a result of unmatched α-globin accumulation. Furthermore, it would be particularly important to examine the antiapoptotic processes occurring in thalassemia patients. Moreover, we should ask the following question: at what erythropoietic stage is it possible to inhibit the increased apoptosis in β-thalassemic cells? The answer to this question should facilitate understanding of the main mechanism of accelerated apoptosis in β-thalassemia patients and perhaps aid the development of new therapeutic strategies. A summary of the apoptotic processes maintaining regulatory function is presented in Figure 4. It seems that the available data [144][145][146] coming from detailed Next Generation Sequencing (NGS) and mass spectrometry-based studies, facilitating transcriptome and proteome analyses at the single-cell level, may provide a research background for studies seeking near-definitive answers concerning the interplay between regulatory events resulting from the sequential regulation of expression of particular genes and apoptotic signals from other cells and the cellular environment.

Conflicts of Interest: The authors declare no conflict of interest. The funder had no role in the writing of the manuscript, or in the decision to publish the results.

Abbreviations:
TAB1: TGFβ-activated kinase 1 (TAK1)-binding protein 1
THD: TNF homology domain
TNFR1: TNF-α receptor 1
TNFR2: TNF-α receptor 2
TNFRSF: tumor necrosis factor receptor superfamily
TNFSF: TNF superfamily
TNFα: tumor necrosis factor α
TRADD: TNFR1-associated protein with death domain
TRAF: TNF receptor-associated factor
TRAILR1/TRAILR2: TNF-related apoptosis-inducing ligand receptors 1 and 2
UBA: UB-associated domain
XIAP: X-linked IAP
Long-term temperature and sea-level rise stabilization before and beyond 2100: Estimating the additional climate mitigation contribution from China's recent 2060 carbon neutrality pledge

As the largest emitter in the world, China recently pledged to reach a carbon peak before 2030 and carbon neutrality before 2060, which could accelerate progress in mitigating negative climate change effects. In this study, we used the Minimum Complexity Earth Simulator and a semi-empirical statistical model to quantify the global mean temperature and sea-level rise (SLR) response under a suite of emission pathways constructed to cover various carbon peak and carbon neutrality years in China. The results show that China will require a carbon emission reduction rate of no less than 6%/year and a growth rate of more than 10%/year for carbon capture capacity to achieve carbon neutrality by 2060. Carbon peak years and peak emissions contribute significantly to mitigating climate change in the near term, while carbon neutrality years are more influential in the long term. Mitigation due to China's recent pledge alone will contribute 0.16 °C-0.21 °C of avoided warming at 2100 and will also lessen the cumulative warming above the 1.5 °C level. When accompanied by coordinated international efforts to reach global carbon neutrality before 2070, the 2 °C target can be achieved. However, the 1.5 °C target requires additional efforts, such as global-scale adoption of negative emission technology for CO2, as well as a deep cut in non-CO2 GHG emissions. Collectively, the efforts of adopting negative emission technology and curbing all greenhouse gas emissions will reduce global warming by 0.9 °C-1.2 °C at 2100, and also reduce SLR by 49-59 cm in 2200, compared to a baseline mitigation pathway already aiming at 2 °C. Our findings suggest that while China's ambitious carbon-neutral pledge contributes to the Paris Agreement's targets, additional major efforts will be needed, such as reaching an earlier and lower CO2 emission peak, developing negative emission technology for CO2, and cutting other non-CO2 GHGs such as N2O, CH4, O3, and HFCs.

Introduction

The Paris Agreement adopted in 2015 set the goal to hold 'the increase in the global average temperature to well below 2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels' (UNFCCC 2017). To achieve this goal, every participating country was required to plan climate action in the form of 'nationally determined contributions' (NDCs). However, even under the current NDC plans, greenhouse gas (GHG) emissions will continue to rise to 56 GtCO2-eq yr⁻¹ by 2030, which will lead to a global mean temperature rise of 2.6 °C-3.1 °C by the end of the century, potentially even exceeding 4 °C (Fawcett et al 2015, Rogelj et al 2016, Xu and Ramanathan 2017, Wei et al 2018). To meet the 2 °C and 1.5 °C goals of the Paris Agreement, the current NDCs must be boosted with additional GHG emission reductions of 15 GtCO2-eq yr⁻¹ and 32 GtCO2-eq yr⁻¹ by 2030, respectively (Höhne et al 2020, Olhoff and Christensen 2020, Schaeffer et al 2020). Furthermore, it has become clear that reaching carbon neutrality by mid-century is essential to achieve the 1.5 °C goal (IPCC 2018). Therefore, several countries have begun to make zero-emission commitments.
Developed countries, including those in the European Union (EU), Japan, the Republic of Korea, and Canada, have announced goals of carbon neutrality by 2050 (European Union 2020, Vaughan 2020, Yonhap News 2020, Jinyi 2021). China, the largest developing country and currently the largest carbon emitter, has also recently proposed a plan to reach carbon neutrality by 2060 (Xinhuanet 2020). Since President Xi's surprising announcement, China has begun promoting top-down planning (Xinhuanet 2021) that includes an overhaul of its energy system associated with massive investment. There are two ways to achieve net-zero carbon emissions: reducing gross emissions and increasing negative emissions (Wang and Zhang 2020). Emission reduction involves several aspects, including major and rapid changes in energy supply, massive low-carbon transitions with the development of the carbon market, and changes in end-use consumption for energy saving. Negative emission technologies broadly include land-based solutions, such as enhancing agricultural and forestry carbon sinks through soil management and afforestation, but also carbon capture, utilization and storage (CCUS) for industrial facilities, such as biomass energy carbon capture and storage and direct air carbon capture. Among them, CCUS has a large potential emission reduction of approximately 3 GtC yr⁻¹ to 10 GtC yr⁻¹ (Smith et al 2016, Mac Dowell et al 2017), but is still in its early stages with very limited capacity. Specifically, China's CCUS capacity was about 0.001 GtC in 2020 (Cai and Li 2020). To keep the 1.5 °C goal alive, an accelerated scale-up of CCUS is necessary (van Vuuren et al 2018, IPCC 2018, Jiang 2018, Detz and van der Zwaan 2019, Hanna et al 2021). Because of the high cost of mitigation (particularly the upfront investment), it is worth demonstrating what climate benefits the proposed carbon neutrality commitment from China would contribute, especially given the uncertainty associated with the pathways towards 2060. This was the purpose of this study. In particular, we aim to understand how different carbon peak years (CPYs) and carbon neutrality years (CNYs) will affect the global temperature rise in this century. Although the projected temperature rise can be linearly approximated by cumulative CO2 emissions (Wang et al 2012, Knutti et al 2017, Arora et al 2020), more accurate quantification of the role of all GHG species requires climate model simulations (such as in van Vuuren et al (2011), Ricke and Caldeira (2014), IPCC (2018), Tong et al (2019)). In addition to global mean temperature, sea-level rise (SLR) also has profound adaptation implications. SLR can be estimated using an empirical linear relationship that is mainly tied to historical warming, but also to the warming rate during a specific period (Vermeer and Rahmstorf 2009, Rahmstorf et al 2012, Hu et al 2013). For reference, if the global temperature rises by 2 °C, the global sea level will rise 46-55 cm during this century (relative to 2000), and there will be a lower SLR of 40-48 cm if the temperature rise is limited to 1.5 °C (Mengel et al 2016, Rasmussen et al 2018). Overall, it remains unclear whether the newly pledged carbon neutrality commitment could hold the temperature rise to below 2 °C or even 1.5 °C, and how the corresponding SLR would respond in this and the next century. To assess the climate impact of China's 2060 carbon neutrality pledge, this study designed a series of idealized carbon-neutral pathways with different CPYs and CNYs.
Then, we used the Minimum Complexity Earth Simulator (MiCES) to estimate the temperature rise, which feeds into a semi-empirical statistical model to project the SLR under the various carbon-neutral pathways. Moreover, the mitigation benefits of the new 2060 pledge as well as coordinated international reductions are presented in a comprehensive suite of metrics, including avoided warming, peak warming year, peak warming level, year of reaching 1.5 °C, number of years exceeding 1.5 °C, cumulative warming above the 1.5 °C level, warming reversal rates towards 2100, and avoided SLR in this and the next century.

Uncertainty of carbon neutrality pathways in China

The baseline CO2 emissions for China and the world from 2015 to 2100 were derived from an NDC pathway developed by Huang et al (2020) using a Computable General Equilibrium model with optimized costs. Note that this baseline scenario is already a mitigation pathway globally, which aims at holding the temperature rise under 2 °C by 2100. Thus, it should not be confused with a typical 'baseline' in the literature, which assumes no or weak climate policy in place, such as SSP3-7.0. In this NDC pathway, CO2 emissions in China peak in 2030 and continue to decline towards 2100, while global emissions peak in 2040. Here, we further extend the NDC-related pathway to 2200 by holding the same reduction rate as in 2050-2100 (thick black line in figure 1). The carbon neutrality pathways in China were constructed to have two distinct components: reducing gross emissions and increasing the CCUS capacity. Emission reduction is represented by a power function of time with a reducing rate r, as follows:

$$\mathrm{Emission}(t) = \mathrm{Emission}_{\mathrm{peak}} \times (1 - r)^{\,t - t_{\mathrm{peak}}}, \qquad (1)$$

where Emission is the emission of China in year t (beginning at t_peak), and Emission_peak and t_peak are the peak emission and corresponding year, respectively. The growth of CCUS is represented as an S-curve function with a growth rate g and a cap of CCUS_limit, as follows:

$$\frac{d\,\mathrm{CCUS}}{dt} = g \times \mathrm{CCUS} \times \left(1 - \frac{\mathrm{CCUS}}{\mathrm{CCUS}_{\mathrm{limit}}}\right), \qquad (2)$$

where CCUS_limit is set to 0.5 GtC yr⁻¹ for China according to the layout proposed by Wei et al (2021). As noted previously, the current CCUS capacity is very low, at 0.001 GtC (Cai and Li 2020), and that is the initial condition used in this study. To assess the impact of pathway uncertainty associated with the carbon peak year (2024-2030) and CNY (2050-2070), a suite of pathways was constructed. The values of r and g for certain CPYs and CNYs are listed in table 1. Because CCUS is limited to 0.5 GtC yr⁻¹, only a sufficiently large r can lead to eventual carbon neutrality. Thus, the value of r was set to 6.0%/year, which is the minimum to realize carbon neutrality in 2060 or later; 9.0%/year, which is the minimum to realize carbon neutrality in 2050; and a faster rate of 12.0%/year for sensitivity exploration. The corresponding values of g were calculated based on the given r and the assumed CPY in order to reach carbon neutrality in certain years. It is reasonable that a larger r would require a smaller g to achieve carbon neutrality at a certain time (e.g. 2060).

International cooperation to reach global carbon neutrality

International cooperation is required to mitigate climate change. As previously mentioned, several countries raised their own carbon-neutral targets before or shortly after China. As China is one of the world's major emitters, world emissions partially depend on China's emissions.
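To make the pathway construction concrete, the following is a minimal sketch (not the authors' code) that combines the post-peak reduction rule of equation (1) with the logistic CCUS ramp-up of equation (2). The CCUS cap (0.5 GtC yr⁻¹), initial capacity (0.001 GtC), and rate values follow the text, while the assumed peak emission of 3 GtC yr⁻¹ and the function names are illustrative.

```python
import numpy as np

def china_net_emissions(e_peak=3.0, t_peak=2027, r=0.06, g=0.16,
                        ccus0=0.001, ccus_cap=0.5,
                        years=np.arange(2015, 2201)):
    """Illustrative net CO2 pathway for China (GtC/yr): gross emissions are
    held at e_peak until t_peak and then reduced at rate r per year (eq. (1));
    CCUS grows logistically at rate g toward ccus_cap (eq. (2)), integrated
    here with a simple one-year Euler step."""
    net = np.empty(len(years))
    c = ccus0
    for i, t in enumerate(years):
        e = e_peak if t <= t_peak else e_peak * (1.0 - r) ** (t - t_peak)
        c = min(ccus_cap, c + g * c * (1.0 - c / ccus_cap))
        net[i] = e - c
    return years, net

years, net = china_net_emissions()
# First year at or below net zero (assumes neutrality is reached in the horizon).
cny = years[np.argmax(net <= 0.0)]
print(f"Illustrative carbon-neutrality year: {cny}")
```

Under this construction, a larger r brings gross emissions down faster, so a smaller g suffices to reach net zero by a given year, which is the trade-off noted in the text.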
To correlate the world's emission response to China's carbon neutrality pathway obtained in the previous section, we fit the world's emissions to China's emissions using all available SSP pathway emission data (Riahi et al 2017) in the low-emission range (China's emissions less than 3.5 GtC yr⁻¹) and obtained a quadratic function. The R² of the fit is 0.94 and the root mean square error is 1.45 GtC yr⁻¹. Therefore, using this function, global emissions (fossil fuel and land use) can be approximately estimated from China's emissions. In addition, global emissions are capped by the baseline, that is, the NDC pathway derived from Huang et al (2020).

(Table 1. Values of the emission reduction rate r and the corresponding CCUS growth rate g for each combination of carbon peak year (2024, 2027, 2030) and carbon neutrality year (2050-2070).)

Considering the technical limitations and the political will to reduce emissions, we considered four cases of decarbonization pathways from 2015 to 2200:

Case A) China-only pathway. The rest of the world stays with the baseline-mitigation emissions, while China's emissions reach carbon neutrality and go negative. This is a hypothetical case to quantify the added value of enhanced pledges from China.

Case B) Global zero-CO2 pathway. In addition to Case A, the rest of the world also reduces carbon emissions to zero, but without going negative.

Case C) Global negative-CO2 pathway. In addition to Case B, the rest of the world reaches negative carbon emissions between 2050 and 2070.

Case D) Global zero-GHG pathway. This case uses the same carbon emissions as Case C, but also considers non-CO2 GHGs following the RCP4.5 emissions to 2050 (e.g. the peaks of CH4 and N2O are in 2030 and 2040, respectively), which then decline linearly to zero in 2100, as opposed to the remaining 2.5 GtC-eq in RCP4.5.

Calculation of temperature rise

The MiCES is used to simulate climate change from 1850 to 2200 caused by global GHG and aerosol emissions (Sanderson et al 2017). It is a simplified climate model based on the energy and carbon budget of the Earth system. In this model, the Earth system is divided into four parts: land, atmosphere, surface ocean, and deep ocean, wherein carbon and heat transfer between these parts can achieve dynamic equilibrium. In total, the model contains 37 parameters, which can be divided into two categories: heat and carbon transfer parameters, and chemical parameters. Herein, we used parameter sets optimized by Chen et al (2020), who investigated the sensitivity of the parameters and found that seven parameters related to heat and carbon transfer were the most sensitive among the 37. Chen et al (2020) optimized the parameters by fitting the most sensitive ones to the observed emissions and temperature rise within their uncertainty ranges. For example, the most important parameter, the equilibrium climate sensitivity, starts in the 1 °C-6 °C range and is set to 4 °C in the original model. After the optimization, this parameter is recalibrated to 2.8 °C in the updated model used in this study, which is close to the central values documented in recent IPCC reports. The settings of the key parameters are summarized in table 2 for reference and reproducibility. Note that climate sensitivity is one of the major causes of uncertainty; we use the 2.3 °C-4.7 °C range (the 5%-95% bounds) of Sherwood et al (2020) for the uncertainty range calculation.
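MiCES itself is a four-box energy-and-carbon budget model; as a much-reduced illustration of the same idea, the sketch below converts a radiative-forcing series into a surface temperature anomaly with a two-box (surface and deep-ocean) energy balance. All parameter values are generic textbook-scale numbers, not the optimized MiCES parameters of Chen et al (2020); the feedback parameter of 1.3 W m⁻² K⁻¹ is chosen only because it implies roughly 2.8 K per CO2 doubling, echoing the recalibrated sensitivity quoted above.

```python
import numpy as np

def two_box_temperature(forcing, lam=1.3, c_s=8.0, c_d=100.0, gamma=0.7, dt=1.0):
    """Didactic two-box energy-balance model (a stand-in, not MiCES).
    forcing: radiative forcing in W/m^2, one value per year
    lam:   climate feedback parameter, W m^-2 K^-1
    c_s:   surface-layer heat capacity, W yr m^-2 K^-1
    c_d:   deep-ocean heat capacity, W yr m^-2 K^-1
    gamma: surface-to-deep heat exchange coefficient, W m^-2 K^-1
    Returns the surface temperature anomaly (K) for each year."""
    ts = td = 0.0
    out = np.empty(len(forcing))
    for i, f in enumerate(forcing):
        dts = (f - lam * ts - gamma * (ts - td)) / c_s  # surface energy budget
        dtd = gamma * (ts - td) / c_d                   # deep-ocean heat uptake
        ts, td = ts + dt * dts, td + dt * dtd
        out[i] = ts
    return out

# Example: forcing ramps to 4 W/m^2 over 150 years, then is held constant.
forcing = np.concatenate([np.linspace(0.0, 4.0, 150), np.full(100, 4.0)])
print(f"Warming after the ramp and hold: {two_box_temperature(forcing)[-1]:.2f} K")
```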
The historical (up to 2014) global CO2 emissions come from the PRIMAP-hist dataset (Gütschow et al 2016), which combines several published datasets to provide GHG emission pathways both globally and for individual countries. The historical CO2 pathway was merged with the constructed CO2 pathways for 2015-2200 (section 2.2). The radiative effect of non-CO2 GHGs on the climate was included using an individual chemical module, not the carbon cycle model. We considered the other non-CO2 GHGs listed by the Kyoto Protocol (as well as sulfate, which represents the aerosols' cooling effect). We adopt the non-CO2 projections in RCP4.5 (www.iiasa.ac.at/web-apps/tnt/RcpDb) because the CO2 emissions in RCP4.5 are the closest to the baseline selected here. These non-CO2 emissions are considered in all cases.

Calculation of SLR

The SLR was calculated using a semi-empirical statistical model proposed by Vermeer and Rahmstorf (2009):

$$\frac{d\,\mathrm{SLR}}{dt} = a\,(T - T_0) + b\,\frac{dT}{dt},$$

where T and SLR are the temperature rise and SLR at time t, respectively, and T_0 is the base temperature at which sea level is in equilibrium with climate. The coefficient estimates for the model were based on the fit of the observed global temperature and SLR (NOAA 2020, 2021). The 1850-2014 period was considered the historical period used to calibrate the model, generating the optimized coefficients a = 0.25 cm yr⁻¹ K⁻¹, b = −0.25 cm K⁻¹, and T_0 = −0.43 K. Note that these coefficients are slightly different from those used in Hu et al (2013), which may be because of the different observation data used for training.

Emission pathways

Figure 1 shows the emissions of the different proposed pathways. China's emission pathway decreases from the Chinese NDC baseline (Huang et al 2020) after the assumed CPYs of 2024, 2027, and 2030, and reaches neutrality between 2050 and 2070 (figure 1(a)). As the CCUS limit was set to 0.5 GtC yr⁻¹, negative emissions remain constant once they reach the lower bound. Correspondingly, the world's carbon emissions under the three different pathways (figures 1(b)-(d)) have the same peak emissions of 9.3 GtC, 9.5 GtC, and 9.7 GtC in 2024, 2027, and 2030, respectively. Under the China-only pathway (a hypothetical case to isolate the contribution from China's recent pledge alone), 123-160 GtC less carbon will be emitted than under the baseline NDC during this century (2015-2100); however, the world will not achieve net-zero emissions. Conversely, under Case B (global zero CO2, figure 1(c)), there is a large decrease in carbon emissions, reaching zero around 2050-2070 and causing a cumulative difference of 370 GtC to 420 GtC during this century. Under Case C (the negative CO2 pathway), the emission pathway is the same as in Case B until neutrality; however, emissions further decrease to approximately −3 GtC yr⁻¹ after this time (proportional to the imposed cap of 0.5 GtC yr⁻¹ for China), resulting in 30-100 GtC fewer carbon emissions than in Case B (global zero CO2) during this century. Based on the simple scaling approach, we estimate that global carbon neutrality will be achieved 2-9 years after China's carbon neutrality, and an earlier carbon neutrality in China will result in a shorter delay. For example, China reaching carbon neutrality in 2050 will lead the world to neutrality in 2052; however, if China reaches carbon neutrality in 2070, global carbon neutrality will be delayed to 2075-2079. To illustrate the sensitivity to the timing of policy implementation, we also find that the uncertainty of CNYs (i.e.
2050-2070) has a stronger impact on the cumulative emissions, with a 35 GtC difference under the zero CO2 pathway and a 130 GtC difference under the negative CO2 pathway (within this century). In contrast, the near-term CPYs (i.e. 2024-2030) only cause a 40 GtC difference under the negative CO2 pathway and a 50 GtC difference under the zero CO2 pathway.

Temperature rise under different pathways

Compared with the global baseline (black line in figure 1(b)), China's carbon neutrality will help reduce the global temperature rise from 2.46 °C (uncertainty range 2.06 °C-4.01 °C) to 2.25 °C-2.30 °C (uncertainty range 1.89 °C-3.75 °C) by 2100 (figure 2(a)). That is, the contribution from China's recent pledge alone will reduce warming by 0.16 °C-0.21 °C, contributing approximately 17%-22% towards the Paris Agreement's 1.5 °C goal (relative to the 0.96 °C gap between the NDC projection and 1.5 °C). This is consistent with a recent estimate that China achieving carbon neutrality would lower global warming by approximately 0.2 °C-0.3 °C (Climate Action Tracker 2020). The range of benefits comes from the uncertainty associated with the decarbonization pathway, but as can be seen in figure 2(a), an earlier carbon neutrality year would lead to greater avoided warming. With global efforts to achieve carbon neutrality in the second half of this century, all three other cases (global zero CO2, global negative CO2, and global zero GHG) can successfully hold the temperature rise below 2 °C (figures 2(b)-(d)), which is consistent with previous studies (Rogelj et al 2015, 2019, Salvia et al 2021). However, stabilizing the temperature below 1.5 °C is more difficult (horizontal dashed line in figure 2), which is discussed in detail next. Under the global zero CO2 pathway (Case B in figure 2(b)), the global temperature would slowly decrease after peaking at 1.64 °C-1.78 °C (uncertainty range 1.37 °C-3.03 °C) around 2087 and would remain nearly constant until the end of the century and beyond. Under this pathway, the warming peak year (around 2087) remains the same for all CNYs; because CO2 emissions continue to be zero after carbon neutrality, the warming peak mostly depends on the non-CO2 emissions and the sensitivity of the Earth system. Meanwhile, under the global negative CO2 pathway (Case C), the global temperature will rise, peaking at 1.6 °C-1.8 °C (uncertainty range 1.35 °C-2.92 °C) between 2062 and 2085, which is approximately 15 years later than the CNY, before decreasing to a rise of 1.5 °C-1.8 °C (uncertainty range 1.21 °C-2.95 °C) in 2100 at a rate of 0.02-0.05 °C/decade. The temperature overshoot is stronger in Case D, the zero GHG pathway. Specifically, temperatures will peak earlier (in 2060-2067) at 1.6 °C-1.7 °C (uncertainty range 1.34 °C-2.77 °C), where the peak time varies with the CNY and may even occur before carbon neutrality, as non-CO2 emissions are lower under the zero GHG pathway. After the peak, the temperature would then decrease to 1.3 °C-1.6 °C (uncertainty range 1.08 °C-2.71 °C) in 2100 and maintain the declining trend into the 22nd century at a rate of 0.07-0.10 °C/decade.
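The pathway comparisons above, and those in the next section, rest on a handful of summary metrics (peak warming and its year, years above 1.5 °C, cumulative warming above 1.5 °C, and the warming reversal rate towards 2100), all of which are simple functionals of a simulated temperature series. A sketch of how such metrics can be computed is shown below; the variable names and the toy series are illustrative, not the study's data.

```python
import numpy as np

def warming_metrics(years, temp, threshold=1.5):
    """Summary metrics from an annual global-mean temperature-anomaly
    series (deg C relative to pre-industrial)."""
    i_peak = int(np.argmax(temp))
    above = temp > threshold
    # Reversal rate near 2100: negated linear trend over 2091-2100, deg C/decade.
    win = (years >= 2091) & (years <= 2100)
    rate = -10.0 * np.polyfit(years[win], temp[win], 1)[0]
    return {
        "peak_warming": float(temp[i_peak]),
        "peak_year": int(years[i_peak]),
        "years_above": int(above.sum()),
        # deg C * decade of exceedance, analogous to cooling degree days:
        "cum_warming_above": float((temp[above] - threshold).sum() / 10.0),
        "reversal_rate_2100": float(rate),
    }

# Toy series: warming peaks mid-century, then slowly declines.
yrs = np.arange(2015, 2201)
toy = 1.2 + 0.6 * np.exp(-((yrs - 2070) / 60.0) ** 2)
print(warming_metrics(yrs, toy))
```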
While most of the 15 constructed pathways under the negative CO2 pathway (Case C) can consistently bend the warming curve past 2100 and eventually bring it back under 1.5 °C, meeting the target of limiting the temperature to under the 1.5 °C level in 2100 requires more ambitious emission ramp-down pathways. Among the 15 constructed pathways, only the three most aggressive ones (the world's carbon emissions peaking in 2024-2027, instead of 2030, and reaching carbon neutrality around 2050; or peaking in 2024 and reaching neutrality before 2058) have a chance of keeping the temperature rise under 1.5 °C in 2100 with the aid of coordinated global efforts. We also emphasize that when supplemented with non-CO2 GHG neutrality before the end of the century (Case D), which is even more ambitious than cutting the non-CO2 GHG emissions to 1.4 GtC-eq as in SSP1-1.9, it is possible to keep the temperature rise under 1.5 °C in 2100 under more lenient carbon neutrality conditions. Specifically, this would require the world to reach carbon neutrality before 2065 with an emission peak before 2030, or to reach carbon neutrality in 2070 with an earlier and lower emission peak in 2024 (figure 2(d)). In contrast, in the absence of sustained net-negative carbon emissions and zero emissions of non-CO2 GHGs (Case B), even if the world achieves carbon neutrality as early as 2050, the 1.5 °C target will barely be achieved (figure 2(b)). This suggests that even with reduced human influence, the climate system stays near a steady state, maintaining the temperature rise for a long time (figure 2(b)), which is consistent with the results of the Zero Emissions Commitment Model Intercomparison Project, which showed that the further temperature rise remains close to zero after reaching carbon neutrality (Jones et al 2019, MacDougall et al 2020). The benefit of pursuing both negative CO2 emissions and aggressive non-CO2 GHG cuts does not end in 2100. Figure 2 shows that both the negative CO2 and zero GHG pathways have cooler temperatures in 2150 than in 2050, while the zero CO2 pathway remains stable through 2150. Indeed, the projected warming reversal rate in 2100 is 0.02-0.05 °C/decade under the negative CO2 pathway and 0.07-0.1 °C/decade under the zero GHG pathway, much larger than the 0.003 °C/decade rate in the zero CO2 pathway (a quasi-stabilization). Figure 3 shows the temperature rise in 2050, 2100, and 2150 depending on the different CPYs and CNYs. It can be seen that the temperature rise increases with delayed CPYs and CNYs, especially by 2100 (figure 3). In particular, a 20 year difference in the CNY leads to a sizable 0.2 °C difference in the 2100 temperature under the negative CO2 and zero GHG pathways and 0.04 °C-0.07 °C under the zero CO2 pathway. Meanwhile, a 6 year delay in the CPY causes an approximately 0.07 °C-0.09 °C increase in the 2100 temperature. This indicates that earlier emission peaks will lead to lower temperature rises, reducing the burden of mitigation later, as suggested by many previous studies (Tong et al 2019, Olhoff and Christensen 2020). In particular, for the near-term period (up to mid-century), CPYs cause a greater difference in temperature rise than CNYs. We find that the 6 year difference in CPY avoids approximately 0.05 °C-0.07 °C of warming in 2050, while the 20 year difference in CNY causes a negligible 0.01 °C.
However, in the long term, CPYs and CNYs cause approximately the same difference in temperature rise in 2100 and 2150: approximately a 0.06 °C-0.09 °C difference for the 6 year range of CPY explored here (2024-2030) and a 0.04 °C-0.08 °C difference for the 20 year range of CNY explored here (2050-2070). Furthermore, CNYs show a greater influence, of approximately 0.2 °C-0.3 °C in 2150, in both the negative CO2 pathway and the zero GHG pathway than in the zero CO2 pathway, which is consistent with the cumulative carbon differences caused by the different pathways (section 3.1), because earlier neutrality implies a larger amount of avoided cumulative emissions over time.

(Table 3. Sensitivity of various metrics (the year when warming crosses 1.5 °C, years above 1.5 °C, and cumulative warming above 1.5 °C) to the assumed carbon peak year (CPY) and carbon neutrality year (CNY) under Cases C and D.)

Even for the pathways that can successfully bend the warming down below 1.5 °C at 2100 or in the early 22nd century, the number of years exceeding 1.5 °C is another metric to be considered. As shown in table 3, under both the negative CO2 and zero GHG cases, the temperature rise will exceed 1.5 °C at approximately the same time (2043-2046), depending on the CPY. Under the negative CO2 pathways, there is a 46-104 year period in which the temperature rise is above 1.5 °C, whereas this period is only 33-68 years under the zero GHG pathway. Further, it can be seen that the CNY plays the more influential role, leading to a difference of approximately 40-50 years in the years above 1.5 °C under the negative CO2 pathway and 25 years under the zero GHG pathway. In contrast, the CPY only leads to a 15 year difference under the negative CO2 pathway and a 10 year difference under the zero GHG pathway. Thus, CNYs are the main driver of the number of years exceeding 1.5 °C. Consistently, for the cumulative warming above the 1.5 °C level, which is analogous to cooling degree days, CNYs are more influential than CPYs, causing an approximately 8 °C·decade difference (between the more aggressive and the more relaxed decarbonization pathways), as opposed to a 3 °C·decade difference due to the spread in CPYs. Looking beyond the temperature level, we also note that with efforts to reduce non-CO2 GHG emissions (Case D), the years above 1.5 °C and the cumulative warming above 1.5 °C are reduced to only 60%-70% of those under the negative CO2 pathway (Case C), further justifying the role of curbing non-CO2 GHG emissions, beyond their well-demonstrated effect in reducing near-term warming rates (Ocko et al 2021).

Sea level rise under different pathways

While the temperature rise can be stabilized by the end of the century, sea level will continue to rise under all pathways. However, the pathways affect the SLR differently because SLR is related to the cumulative temperature rise. Our results show that SLR is less sensitive than temperature rise to the CPYs and CNYs within the pathways, which may be due to the large heat capacity of the ocean. Specifically, under the baseline pathway, the sea level will rise to 71 cm at the end of the century (relative to 1850) and by 155 cm in 2200 (figure 4). Figure 4 shows the differences caused by the different pathways.
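These SLR numbers follow from integrating the semi-empirical rate equation of Vermeer and Rahmstorf (2009) described in the methods along a simulated temperature trajectory. A minimal sketch with the calibrated coefficients quoted in the text (a = 0.25 cm yr⁻¹ K⁻¹, b = −0.25 cm K⁻¹, T0 = −0.43 K) is shown below; the toy temperature series is illustrative only.

```python
import numpy as np

def slr_vermeer_rahmstorf(temp, a=0.25, b=-0.25, t0=-0.43, dt=1.0):
    """Integrate dSLR/dt = a*(T - T0) + b*dT/dt (Vermeer & Rahmstorf 2009)
    along an annual temperature-anomaly series (K). Coefficients as
    calibrated in the text; returns SLR in cm relative to the series start."""
    dTdt = np.gradient(temp, dt)
    rate = a * (temp - t0) + b * dTdt
    return np.cumsum(rate) * dt

# Toy example: 1.5 K of warming reached by 2100, then held constant.
yrs = np.arange(1850, 2201)
temp = np.clip((yrs - 1900) / 200.0 * 1.5, 0.0, 1.5)
slr = slr_vermeer_rahmstorf(temp)
print(f"SLR at 2100: {slr[yrs == 2100][0]:.0f} cm; at 2200: {slr[yrs == 2200][0]:.0f} cm")
```

Because the a*(T - T0) term stays positive as long as the temperature remains above the equilibrium base T0, sea level keeps rising even after the temperature stabilizes, which is exactly the behavior reported above.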
Our results (44-45 cm in this century) lie between those found by the IPCC (2018) and Rasmussen et al (2018), who stated that the sea level would rise by 40-48 cm under the 1.5 °C pathway. China achieving carbon neutrality between 2050 and 2070 will lead to a 1-2 cm SLR decrease in 2100 and a 6-9 cm decrease in 2200. With global efforts to reach carbon neutrality, the SLR decrease exceeds 5 cm under the other three cases in 2100 and approaches 50 cm in 2200. It can also be seen that the difference in SLR from the baseline is more sensitive to the CNY than to the CPY under the negative CO2 pathway and the zero GHG pathway, with a 10 cm difference caused by the CNY and 5 cm by the CPY; under the zero CO2 pathway, however, CPY and CNY cause approximately the same difference. The semi-empirical SLR model used here does not account for potential transitions in ice dynamics, which could result in substantially higher SLR (DeConto and Pollard 2016, DeConto et al 2021). Neglecting nonlinear physical processes, feedbacks, and threshold behavior could lead to biased conclusions based on statistical fits. However, statistical modeling of SLR can give cursory yet useful insights into whether SLR can be slowed down, despite land ice dynamics not being fully represented.

Conclusion

This study constructed a set of idealized China emission pathways with different carbon peak years (CPYs) and carbon neutrality years (CNYs). We find that to meet the commitment of carbon neutrality by 2060, an emission reduction rate of no less than 6%/year and a high CCUS growth rate (larger than 10%/year, and in some cases as large as 24%/year) are required for China. Moreover, CNYs have a stronger impact than CPYs on the difference in cumulative emissions within this century. In sync with China's recent pledge, we also developed three sets of global emission pathways: the zero CO2 pathway, the negative CO2 pathway, and the zero GHG pathway. The simple climate model MiCES was used to calculate the temperature rise under each idealized pathway, and the corresponding SLR was calculated using a semi-empirical statistical model. China's carbon-neutral proposal will lead to a cooler climate, as compared with the previous NDC-related baseline (in which emissions are halved from their peak towards the end of the century), by 0.16 °C-0.21 °C at 2100 and by 0.22 °C-0.26 °C in 2150. This suggests that China's pledge of reaching carbon neutrality will additionally contribute at least 17%-22% to the effort of reaching the 1.5 °C global goal. With accompanying global efforts for carbon neutrality around 2070, the global temperature rise will be successfully kept below 2 °C (relative to preindustrial levels). However, more aggressive efforts, such as a broad application of negative emission technologies (NETs, more than simply offsetting the remaining portion of fossil fuel use) and a deep cut to all other non-CO2 GHG emissions, are needed to bend the temperature curve down to less than 1.5 °C. Only limited pathways that include NETs (our Case C) can fulfill the 1.5 °C goal at 2100, namely a carbon peak in 2024-2027 with carbon neutrality around 2050, or a carbon peak in 2024 with carbon neutrality before 2058. With complete GHG neutrality by the end of the century (our Case D), the conditions become more lenient, such as carbon neutrality before 2060 with an emission peak before 2030, or carbon neutrality in 2065 with an emission peak in 2024.
Furthermore, the explored ranges in CPYs and CNYs lead to approximately the same difference in temperature rise in 2100 and 2150. Earlier CPYs and CNYs can avoid warming of 0.04 °C-0.09 °C in 2100 under the zero GHG pathway. Meanwhile, CNYs are the main driver of the difference in the number of years exceeding 1.5 °C and in the cumulative warming above 1.5 °C. Specifically, the explored range of CNYs can lead to a large 40-50 year difference in years above 1.5 °C and an approximately 8 °C·decade difference in cumulative warming above 1.5 °C, whereas the corresponding range of CPYs causes a 10-15 year difference in years above 1.5 °C and a 3 °C·decade difference in cumulative warming. The coordinated global efforts under the three pathways can hold the SLR to under 67 cm in 2100 (compared with the baseline of 71 cm relative to preindustrial). The benefit is much larger in the 22nd century, leading to a 40-51 cm SLR avoidance at 2200 relative to the baseline pathway under the negative CO2 pathway, and an even larger difference of 49-59 cm under the zero GHG cases, which is a significant cut from the 130 cm of projected SLR (since 2020) in the baseline case. Overall, we conclude that while the iconic goals of CNYs are crucial, taking immediate action to reach earlier CPYs and lower emission peaks also contributes greatly to mitigating climate change, especially in the near term. Moreover, we emphasize that it is important to go beyond net-zero carbon. The world needs to further scale up negative emission technology (mindful of economic and geophysical constraints) and to adopt a deep cut of non-CO2 GHGs in order to reduce the upcoming climate risks in this and the next century and eventually bring the planet back to a safe regime.

Data availability statement

The data that support the findings of this study are available upon reasonable request from the authors.
Evaluation of Injury Cases for Dental Intervention Described in Legal Dentistry Reports

The dentist's responsibility for dental interventions performed in the exercise of his or her activity entails civil, ethical, administrative and criminal obligations. When a harmful result occurs to a patient, whether by recklessness, malpractice and/or negligence, an examination of the injury can be ordered by a judicial authority and carried out at the expert level, making the dentist subject to the Brazilian Penal Code and its penalties. The dentist may be forced to repair the damage and pay compensation according to the consequence caused, based on the Civil Code, or both, and may face a double action. Given the increase in Dentistry-related lawsuits, the focus of this research is to give greater visibility to the subject, emphasizing the ethical and legal aspects involved in professional practice. To this end, we carried out a survey of reports of maxillofacial injuries from the "Instituto Médico Legal Nina Rodrigues", Salvador-BA, Brazil, from January 2007 to December 2013, analyzing the data on the procedures performed, the reason for the expert's report and its result, the professional responsibility, and the conclusion given by the expert. Of the total of personal injury examinations related to dentists, most complaints were in the area of surgery (42.9%), followed by Endodontics and Orthodontics with 14.3% each; 96% of cases involved one or more elements of professional liability, and 47.4% were classified by the experts as minor injuries. It is concluded that the increase in injury lawsuits arising from dental services is due to dentists failing to take responsibility to protect themselves from poor results and performing procedures without the proper skill. Professional training for the acquisition of technical and scientific knowledge in each area is therefore suggested, enabling dentists to act with the utmost care and professionalism.

INTRODUCTION

Responsibility is required in the performance of duties in all professions. In the health area, interventions in the patient, even when aiming to rehabilitate, restore or prevent diseases, are subject to adverse effects. The health provider must therefore assume obligations regarding the treatment performed in order to protect a predictable result (Garbin et al., 2006a). The failure of a procedure may be due to three attitudes: negligence (passivity, omission or behavior contrary to that which should be taken); malpractice (lack of technical or scientific preparation for the conduct); and recklessness (hasty action without caution and without regard for consequences). The results of these attitudes expose patients to risks that could be avoided with due care during service (Garbin et al., 2006a, 2007).

Patients' complaints and the search for damage compensation are becoming more frequent. In order to clarify the law and to cooperate with judicial labors on relevant problems, experts use a set of investigations for the assessment of the damage caused (Peres et al., 2007). Criminal matters in Dentistry require the verification of injury incidents in the different tissues and structures of the maxillofacial complex, and a careful record of the treatment becomes critical to support future analyses aimed at the resolution of legal issues (Garbin et al., 2008).
Patients complaints and the search for damages compensation are becoming more frequent.In order to clarify the law, and to cooperate with the judicial labors, problems that are relevant, experts use a set of investigations for the assessment of the caused damages (Peres et al., 2007) .The criminal matters in Dentistry require verification of injury incidents in different tissues and maxillofacial complex structures, * Doctor's degree in Forensic Dentistry.Associate Professor, Universidade Estadual Paulista-UNESP, Araçatuba, Brazil.** Master's degree in Preventive and Social Dentistry Program, Universidade Estadual Paulista-UNESP, Araçatuba, Brazil.*** Doctor's degree in Forensic Dentistry.Adjunct Professor, Universidade Estadual Paulista-UNESP, Araçatuba, Brazil.**** Doctor's degree in Orthodontics.Adjunct Professor, Universidade Estadual Paulista-UNESP, Araçatuba, Brazil.careful record of the treatment becomes critical to support future analyzes aimed at the resolution of legal issues (Garbin et al., 2008). Injury examination constitutes in one of the skills performed by the forensic dentist in the criminal context, within the Forensic Medical Center (Peres et al.).The final product of this expert examination is the report, which is the written and detailed account of all specific facts and permanent, related to a skill.The correct issuance of the report is essential for the proper prosecution of criminal cases, since errors in their description can lead to serious legal flaws (Garbin et al., 2008). The Brazilian Penal Code (BPC) specifies, in Article 129, the crime of bodily injury, which penalties vary according to the results caused by the production of the lesion.The results from dentomaxillar lesions described in BPC usually are associated with theinability to perform usual activities for more than thirty days; permanent debilitation of a member, sense or function; permanent incapacity for work, incurable disease; loss or member destruction, sense or function and permanent deformity (Presidencia da República, Brazil, 1940). Literature is controversial regarding the evaluation of dental injuries.There are scarce discussions on the subject (Peres et al.).Given the increasing number of processes associated with procedures performed by dentists (Garbin et al., 2009), this study aims to analyze the characteristics of descriptions of forensic dentistry reports of injury by professional dentists' action, the liability measured on the professional and the relationship established by official experts in their conclusions, among dental injuries and their results described in Article 129 of the BPC. MATERIAL AND METHOD This is a retrospective, descriptive, quantitative, investigating study on all reports of injuries, a total of 3,600, issued by official forensic dentist experts, from 2007 to 2013 at the "Instituto Médico Legal" Nina Rodrigues -Salvador-BA.The inclusion criteria for this study were: reports describing injuries in professional action, dental character.Exclusion criteria were: reports caused by violence or accident and incomplete reports. After consultation and careful analysis of the reports we extracted relevant information relevant to the process and tabulated as follows: i) Sociodemographic characteristics of the dentist and the patient; ii) Procedure performed by the professional; iii) Damage characterization and the resulting set by the expert according to the BPC and iv) Professional responsibility categorized according to the CCB. 
The identification and characteristics of the professionals and patients were specified in each analysis: dentist (sex, education and workplace) and patient (sex, profession, marital status, age, place of birth and residence), in order to establish the profiles of the surgeons and of the patients complaining about professional conduct.

At the conclusion of the expert report, which indicates whether or not a dental injury resulted, according to the interpretation of the expert, the resulting injuries were coded as: LIGHT, lesions that do not cause any of the results described as serious or very serious; GRAVE, inability to perform usual activities for more than 30 days and/or permanent debility of a member, sense or function; VERY GRAVE, permanent deformity; and NO FEATURE, for cases not concluded by the experts because there were no elements to affirm or deny that the damage was related to the injury presented, as well as for those not admitted as personal injury. The description of the injury was related to the characteristic elements of professional activity, according to the civil liability of dentists founded on the fault theory of the Civil Code, deriving from negligence, malpractice or recklessness.

All terms of this study are in accordance with the required ethical criteria and were carried out with the due consent of the board of the "Instituto Médico Legal" Nina Rodrigues, Salvador-BA. For data analysis, we used the statistical software Epi Info, version 3.5.2.

RESULTS

After evaluating 3,600 reports using the inclusion and exclusion criteria described in the methodology, we selected 35 cases that met the sample criteria.

The criminal reports did not contain detailed information about the accused dentists; however, it was possible to detect that most dentists involved were male (n = 22). As for specialty, the highest percentage corresponded to general practitioners, and employment was more frequent in the private sector (Table I).

As for the procedures performed, we identified that Oral and Maxillofacial Surgery was the most affected area, totaling 42.9% of cases, followed by Endodontics and Orthodontics, each corresponding to 14.3%; Implantology with 11.1%; Prosthodontics with 8.7%; Operative Dentistry with 5.8%; and Periodontics with 2.9% (Fig. 1).

Regarding the characteristics of the complainants, most patients (65.7%) were male; the average age was 35.7 years, ranging from 10 to 75 years, with a prevalence of 86% adults according to the WHO categories, a predominantly single marital status, and employment in private companies. As for the location of residence, peripheral neighborhoods prevailed, with 63% (Table II).
The relationship of one or more figures of professional responsibility to the description of the lesions was established, as negligence, recklessness, malpractice or any irregular attitude of the professional. Of the 35 cases, the recklessness element prevailed with 26%, followed by malpractice with 14% and negligence with 11%. In some cases there were combinations of figures: malpractice + negligence, recklessness + negligence, and malpractice + recklessness, with 12% each; in a single case (2%) three errors of responsibility were committed; and finally, four cases (11%) were not judged as an irregularity (Table III). Among the descriptions of the lesions, paresthesia and tooth loss were significantly expressed, in almost 30% of cases. Depending on the specifics of each case, it is possible to identify that lesions attributed to dentists, relating to tooth loss, excessive wear of tooth substance and paresthesia, received different conclusions in similar cases; the lack of agreement among the experts of the office may be explained by the importance given to the affected region, whether functional, aesthetic or masticatory.

Among the injured regions, we identified molars in 40% of cases, anterior teeth in 12.5%, premolars in 15%, and eight other regions totaling 32.5% of cases. Hard tissues were the areas most affected, accounting for 84% of injuries; consequently, there was no need to categorize the lesions in other tissues (Table IV).

As for the damage caused to the patient by the occupational injury, 22 cases (62.8%) were characterized as involving damage; 1 case (2.9%) did not harm the patient; and 12 cases (34.3%) were inconclusive because, according to the experts, there were no elements to affirm or deny the condition found at the time of the forensic examination.

The following data were obtained from the experts' interpretations regarding the types of injuries: minor lesions, 45.7%; serious injuries (permanent weakness of a member, sense or function, or inability to perform usual activities for more than 30 days), 14.3%; very serious injuries (permanent deformity), 2.9%; and cases not categorized as lesions, 3.1%. The remaining cases, corresponding to 34%, had suboptimal outcomes: they were not open to interpretation owing to the lack of additional tests, or the information collected in the expert examination could not affirm or deny that the lesion corresponded to trauma caused by professional action rather than to a previous condition (Fig. 2).

DISCUSSION

The increase in the frequency of lawsuits against dentists is related to various problems, such as the demand for faster care and a lack of professional responsibility. Knowing the profiles of the professionals and patients involved is important because it allows the contributing factors to be evaluated (Hashemipour et al., 2013; Kiani & Sheikhazadi, 2009; Santos Pacheco et al., 2014).

In this survey, it was observed that male dentists were the most involved in legal proceedings, a situation similar to that found in the work of Santos Pacheco et al. The knowledge gained in the academic education of the dentist provides awareness of the various specialties; however, the dentist should opt for the area of greatest affinity. In the profession, according to Article 7 of Law 5,081/66 (Presidência da República, Brazil, 1966), the professional may exercise two specialties; unfortunately, this is often not respected, leading to many service failure situations.
It was found that 77.1% of the patients were single; although this is not a statistically significant finding, the interpretation is that single patients more often seek aesthetic and health care.

In all major cities, the Forensic Medical Institute and the Medical and Dental Councils are sought not only by local patients but also by patients from other regions, owing to the lack of infrastructure in the cities where they live or to their belief that a larger center will offer greater support and guarantee their rights. This situation is confirmed by the patients' places of birth: although most are locals, there are many cases of non-regional patients (Hashemipour et al.; Kiani & Sheikhazadi).

In this study, 62.9% of the patients lived in the periphery; taking into account the urban center and other region variables, this finding is significant and can be attributed to the large number of private practices in these locations offering low-cost services of dubious quality.

The Brazilian Civil Code contains rules on relationships between interests in general; among these rules, some are of a specific character. Article 186 of Law 10,406/02 provides that dentists are required to repair damage generated by professional errors arising from recklessness, negligence or malpractice. The distribution of dentists among the liability figures was classified and analyzed by the authors in a quantitative approach, as negligence, recklessness, malpractice or any irregular attitude of the professional; of the 35 cases analyzed, the recklessness element prevailed, with 26%.

In Brazil, the conclusion of the expert report on an injury is classified according to the results described in the Brazilian Penal Code (BPC), which typifies, in Article 129, the crime of bodily injury, whose penalties vary according to the results caused by the production of the lesion. Analyzing the classification of the lesions against the results established by Article 129 and judged in the conclusions of the expert reports studied, the light result was the most frequent, with 45.7% of cases, under any figure: negligence, recklessness or malpractice.

The criminal classification of maxillofacial injuries is a very controversial subject. When these injuries relate to dental interventions, there is a consensus among experts to classify them under the item corresponding to no qualified result, that is, a light injury that did not generate inability to perform ordinary duties for 30 days or permanent weakness of a member, sense or function, even when professionals venture to perform complex procedures knowing their limitations.

In this study, the specialty referred to in the analysis relates to the performance of the dental surgeon, without implying that he or she has expertise in the area; it only categorizes according to the procedure performed. By taking responsibility for the care, the professional must have technical and scientific preparation in the area in order to avoid mistakes. This study identified that 54.3% of the dentists were generalists, which did not stop them from carrying out procedures requiring greater skill and knowledge.
An eleven-year retrospective study in Brazil (Santos Pacheco et al.) reported that, after the general practitioner, the orthodontics specialty was the most involved in lawsuits. Two studies conducted in Iran (Hashemipour et al.; Kiani & Sheikhazadi) found differences in the procedures involved, reporting a higher number of cases in Endodontics and Prosthodontics, respectively. Endodontics, Orthodontics and Prosthodontics attract a greater number of legal complaints, as they are prone to higher patient expectations regarding rehabilitation and aesthetics, and any failure is notoriously noticed and claimed. In this study, the predominance of surgery (42.9%) was observed because of the more invasive nature of the procedure, which gives rise to complications in the course of practice and causes injuries. We also observed that few experts justify their conclusions based on the literature: of the 35 cases examined, only two reports were so justified. Thus, without scientific backing in each specific area, 37.1% of the reports were not concluded, owing to the lack of documentation and to the expert justifying the absence of evidence as to whether the injury was caused by professional intervention, as claimed by the patient, or not. Although the number of cases differs among countries and cities, such as in Iran and Cairo (Hashemipour et al.; Kiani & Sheikhazadi; Azab, 2013), in both medical and dental practice most cases belong to the private sector. As for the patients, men (65.7%) presented the most injury complaints, more than women. A similar result was found in a study conducted by Kiani & Sheikhazadi in Tehran, where the percentage was 70%. This can be explained by the fact that men have a greater tendency to complain when facing permanent deformity, as found in the report analysis. However, maxillofacial injuries related to teeth generate conflicts as to their criminal classification (Azab). It is noteworthy, and must be kept in mind, that the teeth have many functions (chewing, aesthetics, phonetics and social), so that injuries to them can be classified correctly. It is important to correctly analyze craniofacial fractures and disjunctions, which can cause direct, indirect, mediate or immediate damage, as well as to describe which damage will be temporary and which will remain (Hesham, 2013). The loss of anterior teeth has all the characteristics to be framed as permanent deformity (aesthetic, visible, not repairable in the normal course). However, there are several conclusions for similar cases, as reported by da Silva et al. (2009) for a case in which the avulsion of two anterior teeth in a woman with a previous complete injury to the lips (upper and lower) was classified as serious injury, thereby excluding the possibility of deformity. Garbin et al. (2006b), analyzing injuries in female victims of domestic violence, report that the doctrine provides for very serious injury if it results "in loss of a member, sense or function, or permanent deformity" and confirm that this is exactly what occurs with the loss of dental elements.
Differences in report results are likely to occur, since the individual and specific characteristics of each expert always exist and may justify different results. However, the lack of consistency observed here should not occur, and this is of concern, inasmuch as legal penalties, lighter or more serious, can be attributed to similar lesions depending on the expert who evaluates them; this underscores the importance of preparing specific questions for criminal reports on dental work (Curley, 2011). In this sense, Garbin et al. (2008) state that the correct issuance of the report is essential for the proper handling of criminal cases, since its correct completion facilitates the interpretation, discussion and conclusion of an expert examination.

If there were parameters to be followed for the evaluation of damage caused to the stomatognathic system, taking into account the Brazilian Penal Code, and also a mandatory forensic dentist presence in the IMLs throughout Brazil, there would be no doubt about the framing of dental injuries among all the professionals concerned with and directly linked to these processes. These facts would benefit the victim, whose damage would be properly qualified, and contribute to the smooth running of the process, becoming the starting point for civil redress (Gulati et al., 2012).

CONCLUSION

In this study, we found that most injuries occurred in procedures that require competence in a specialty and that, in general, the interventions were performed by general practitioners. Some professionals work in health care in an expanding service market; however, in most cases, they do not bother to take specialization courses in their fields and also ignore the general ethical and specific legal rules that regulate the profession.
Dentists have an obligation to act safely, properly and professionally in treating their patients. Their activity carries ethical and moral responsibilities that must be met under the law. Caution is therefore essential in dental care, so that practitioners are not prosecuted for injuries arising from their services.

RESUMEN: The dentist's liability for dental interventions performed in the exercise of professional activity encompasses civil, ethical, administrative and criminal responsibility. When a result harmful to the patient is proven to stem from imprudence, malpractice or negligence, an examination of the injury may be ordered by a judicial authority and analyzed by experts, making the dentist subject to the penalties provided for in the Brazilian Penal Code and liable to repair the damage and pay compensation according to the consequence caused, under the Civil Code, or both, thus potentially facing a double action. Given the increase in legal proceedings related to dentistry, the objective of this research is to give greater visibility to the subject, emphasizing the ethical and legal aspects of professional practice. To this end, we studied forensic dental reports on maxillofacial injuries at the Nina Rodrigues Institute of Legal Medicine, Salvador-BA-Brazil, from January 2007 to December 2013. We analyzed information on the procedures performed, the reason for the expert report and its results, professional responsibility, and the conclusion proposed by the experts. Of all examinations for injuries to patients, the majority concerned the area of surgery (42.9%), followed by endodontics and orthodontics with 14.3% each; 96% of the cases involved one or more elements of professional responsibility, and 47.4% were classified by the experts as minor injuries. It is concluded that the rise in litigation over injuries arising from dental care stems from dentists failing to take responsibility for poor outcomes and performing procedures without adequate skill. Appropriate professional training (specialization) is suggested for acquiring the technical and scientific knowledge of each area, which would allow practitioners to act with maximum care and professionalism. KEY WORDS: dental care, maxillofacial injuries, forensic dentistry, claims analysis.

Table I. Socio-demographic characteristics of the dentists. Fig. 1. Procedures performed by the professional. Table II. Socio-demographic characteristics of the patients. Table III. Classification of responsibility according to the injury caused. Table IV. Regions damaged.
Estimation of Extreme Significant Wave Height in the Northwest Pacific Using Satellite Altimeter Data Focused on Typhoons (1992–2016): The estimation of extreme ocean wave heights is important for understanding the ocean's response to long-term changes in the ocean environment and for the effective coastal management of potential disasters in coastal areas. In order to estimate extreme wave height values in the Northwest Pacific Ocean, 100-year return period SWHs were calculated by applying a Peaks over Threshold (PoT) method to satellite altimeter SWH data from 1992 to 2016. Satellite altimeter SWH data were validated using in situ measurements from the Ieodo Ocean Research Station (IORS) south of Korea and the Donghae buoy of the Korea Meteorological Administration (KMA) off the eastern coast of Korea. The spatial distribution and seasonal variations of the estimated 100-year return period SWHs in the Northwest Pacific Ocean are presented. To quantitatively analyze the suitability of the PoT method in the Northwest Pacific, where typhoons frequently occur, the estimated 100-year return period SWHs were compared by classifying the regions as containing negligible or significant typhoon effects. Seasonal variations of extreme SWHs within the upper 0.1% and the PoT-based extreme SWHs indicated the effect of typhoons on the high SWHs in the East China Sea and the southern part of the Northwest Pacific during summer and fall. In addition, this study discusses the limitations of satellite altimeter SWH data in the estimation of 100-year extreme SWHs.

Introduction Tropical cyclones are accompanied by heavy rainfall, unusually high waves and strong winds, all of which have a great impact on the coastal environment. The intensity of tropical cyclones, and the frequency of the most intense cyclones, has increased as an effect of climate change [1,2]. The intensity of tropical cyclones is also increasing locally as a result of changes in the cyclones' trajectories and in the location of maximum intensity [3–5]. The extreme significant wave height (SWH) is increasing globally as well as regionally, especially in coastal regions [6–10]. Increases in extreme wave height caused by tropical cyclones and hazardous events, combined with predicted sea-level rises (e.g., [11–13]), have the potential to increase the magnitude of disasters along with coastal erosion. Therefore, it is very important to estimate the extreme wave height, such as the 100-year return period SWH, as well as to understand the wave variability over decades. Statistical approaches to extreme value analysis (EVA), such as the Initial Distribution Method (IDM), the Annual Maximum (AM) method, and the Peaks over Threshold (PoT) method, have been developed over the past few decades (e.g., [14–17]). Studies have estimated the extreme wave height in the global ocean and regional seas by applying these statistical methods to buoy measurements and shipborne wave recorder data [7,18–23], satellite observation data [24–30], or model simulation data [31–35]. It is important to develop sophisticated statistical techniques for estimating extreme wave heights and to understand their spatio-temporal distribution in the global ocean and regional seas. In addition, it is also important to study the applicability and limitations of the various estimation techniques that have been developed and utilized in previous studies. In particular, it is valuable to verify that long-term return period SWHs are adequately estimated in seas where extreme events such as typhoons frequently affect the estimation of extreme wave height. The Northwest Pacific has a variety of ocean and atmospheric phenomena that cause spatial and temporal variability of SWH, as shown in the long-term mean of satellite SWH data from 1992 to 2016 (Figure 1a,b). Previous studies have reported an increasing trend for SWH as well as for extreme SWH values corresponding to the upper 1% in the Northwest Pacific (e.g., [36]).
In addition, as shown in Figure 1c,d, the Northwest Pacific is a region with one of the highest frequencies of high-intensity tropical cyclones [37]. On average, more than 28 tropical cyclones occur per year [38], and the frequency of tropical cyclones accounts for 30% of the global total [39]. Therefore, the Northwest Pacific is very suitable for the estimation of the 100-year return period SWH because of the relative abundance of extreme events such as tropical cyclones. Thus, it is important to understand the characteristics of extreme wave height in regions affected by tropical cyclones. Although the spatial distribution of the 100-year return period SWH in the Northwest Pacific region has been presented [40,41], few studies have been conducted on the effect of tropical cyclones on extreme SWH. The objective of this study is to investigate and verify the extreme wave heights using satellite altimeter-observed data in the Northwest Pacific, where typhoons and extratropical storms occur frequently.
This is achieved by (1) validating the satellite altimeter data using in situ measurements; (2) applying the EVA scheme commonly used to estimate extreme wave heights; (3) comparing the estimated extreme wave heights with the maximum SWH measurements; and (4) investigating the difference in extreme wave height characteristics between the typhoon region and the non-typhoon region in the Northwest Pacific. This study also aims to discuss the limitations of and precautions for estimating the extreme wave height using satellite altimeter data, and to demonstrate the necessity of in situ data to verify the satellite-derived extreme wave heights.

Satellite Data In this study, the altimeter SWH data provided by the Institut Français de Recherche pour l'Exploitation de la Mer (IFREMER) from January 1992 to December 2016 (25 years) were utilized [42]. This database is composed of data from nine altimeters (European Remote Sensing-1 (ERS-1), Topography Experiment/Poseidon (TOPEX/Poseidon), European Remote Sensing-2 (ERS-2), Geosat Follow-On (GFO), Joint Altimetry Satellite Oceanography Network-1 (Jason-1), Environmental Satellite (Envisat), Joint Altimetry Satellite Oceanography Network-2 (Jason-2), Cryosat-2, and Satellite for Argos and Altika (SARAL)) over the study period, as shown in Table 1. The altimeter SWH data used in this study were quality controlled as along-track data. In addition, to improve accuracy and consistency, corrections to each altimeter's SWH data were performed by Queffeulou and Croizé-Fillon [42] through comparison of the satellite data with in situ measurements or intercomparison between altimeter data. In the Northwest Pacific, these altimeter SWH data were validated to about 0.1 m in terms of bias and 0.3 m in terms of root-mean-square error (RMSE) [36]. To investigate whether satellite SWH data could be measured near a typhoon center under severe sea states, we used 10.8 µm channel infrared images of the Communication, Ocean, and Meteorological Satellite/Meteorological Imager (COMS/MI) for the period from 26 to 27 August 2012. During this period, several satellite altimeter tracks passed over typhoon Bolaven in the seas around the Korean Peninsula, and satellite SWHs could be obtained for the typhoon event.

In-Situ Data The Ieodo Ocean Research Station (IORS), located at 125.18°E, 32.12°N in the East China Sea south of the Korean Peninsula, has been operating since 2003 [43]. It was constructed on an underwater rock at a depth of approximately 40 m. It is far from land or islands: approximately 149 km southwest of Marado, Korea, 276 km west of Dorisima, Japan, and 287 km from the nearest island of China, as shown in Figure 2b. The SWH measurements from the IORS were used from 2005 to 2016 to evaluate the accuracy of the altimeter SWH data in a region with the greatest frequency of extreme conditions such as typhoons.
The tracks of the satellite altimeters around the IORS are presented in Figure 2c–k, which contains all the regular tracks and transitional tracks during the study period from 1992 to 2016. As the IORS is a tower, wave heights are observed using a radar instrument on a platform-based remote sensing system, in contrast to a conventional buoy, which uses an accelerometer to measure wave height. The observed SWH data might contain abnormal values due to various atmospheric and marine conditions and instrumental errors. The Korea Hydrographic and Oceanographic Agency, which distributes the observation data from the IORS, has developed a series of quality control procedures for wave height measurements based on the techniques presented by the Intergovernmental Oceanographic Commission [44,45] and Evans et al. [46]. The agency distributes the SWH data from the IORS with quality control information. In this study, quality-controlled data at 1-h intervals with quality flags were used for analysis.
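As a minimal illustration of this kind of flag-based screening (applied here to the IORS records and, below, to the Donghae buoy records), one might filter as follows. The column name and the convention that 0 marks a good record are assumptions for illustration, not the providers' actual flag schemes.

```python
import pandas as pd

def apply_qc(df: pd.DataFrame, flag_col: str = "qc_flag",
             good_flags=(0,)) -> pd.DataFrame:
    """Keep only the 1-h records whose quality flag marks them as good.

    The column name 'qc_flag' and the 0 = good convention are
    illustrative assumptions; consult the data provider's
    documentation for the actual flag scheme.
    """
    return df[df[flag_col].isin(good_flags)].reset_index(drop=True)
```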
In addition, the SWH measurements from the Donghae buoy of the Korea Meteorological Administration (KMA) were utilized. The buoy is located relatively far from the coast (129.95°E, 37.54°N), in an area with less influence from typhoons than the IORS. The data were collected from 2011 to 2016. The SWH measurements at 1-h intervals from the Donghae buoy were also quality controlled by applying the quality flags provided with the data. As shown in Figure 3a, SWHs higher than 8 m were observed at the IORS. The monthly average reflects a mixture of seasonal variation, in which the SWH increases in winter and decreases in summer, and the highest SWH in August due to the frequent passage of typhoons (Figure 3c,e). The SWHs from the Donghae buoy show more pronounced seasonal variability than those of the IORS (Figure 3d,f), but they also exhibit high values exceeding 4.5 m due to typhoons in August–September. Another difference from the SWH measurements of the IORS is a peak in April, which is related to fast-developing low-pressure systems passing through the East Sea (Sea of Japan) [47,48].

Best Track Data of Typhoons Information regarding the occurrence and movement of typhoons from 1992 to 2016 in the study area was obtained from the International Best Track Archive for Climate Stewardship (IBTrACS) version 3 release 10 (v03r10) [49]. The number of typhoons during the study period was calculated in bins of 2° × 2° and used as an index to evaluate whether extreme conditions such as typhoons were sufficiently represented in the EVA results.

Estimation of Extreme Value In order to understand the spatial distribution of the extreme wave height using the EVA in the Northwest Pacific, SWH data measured along satellite altimeter tracks were sub-sampled within bins of 2° × 2° [24,28,50]. In this study, the PoT method was utilized to estimate extreme SWHs using buoy measurements and satellite altimeter data (Figure 4). The PoT method, which uses data exceeding a defined threshold, alleviates the limitations of the AM method while keeping the data independent and identically distributed. The data greater than the threshold follow a generalized Pareto distribution (GPD) in the EVA [17]:

F(x) = 1 − [1 + ξ(x − µ)/σ]^(−1/ξ),

where µ is the location parameter, σ is the scale parameter, and ξ is the shape parameter. As the threshold µ is a factor that affects the result of the EVA, it is of considerable importance to select an appropriate threshold value. Too low a threshold causes the estimated extreme value to have a low bias, while too high a threshold does not maintain stability because of excessive suppression of the amount of data [17,21]. Therefore, the stability of the parameters should be tested. The parameters were estimated for every value of the threshold [17]. For the selection of the threshold, the shape parameter (ξ) and the modified scale parameter (σ* = σ − ξµ) are presented in Figure 5 with 95% confidence intervals for every value of the threshold; the red dot indicates the 99.5th percentile SWH. Based on the results of the stability test, the 99.5th percentile SWH suggested by Méndez et al. [21] was selected among the various threshold values proposed in previous studies [21,24,25,28]. To perform the PoT analysis, the data should satisfy the precondition of mutual independence. However, a satellite altimeter along its track is likely to observe similar SWH values within a bin, which may violate this precondition. Taking this into account, we selected one maximum value for each track of each satellite within a given bin. Nevertheless, other satellites were likely to undermine this independence by observing similar SWHs within a given spatial range. As such, an additional temporal constraint was imposed to select only one maximum value among the previously selected maxima. Previous studies have mentioned that the independence of the data can be ensured by separating them by specific time intervals such as two days [35,51] or three days [21]. In this study, data separated at three-day time intervals were used for the EVA. In the PoT method, the probability level P for the 100-year return period SWH is determined as follows:

P = 1 − N_Y / (100 N_PoT),

where N_PoT is the number of exceedances used in the PoT analysis, and N_Y is the number of years covered by the analysis. As mentioned above, the PoT method mitigates the defects of the annual maximum method, but it also has limitations: it is strongly dependent on the threshold, and variations of the estimated extreme value are amplified if insufficient data are available.
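The procedure just described (99.5th-percentile threshold, one peak per three-day window, GPD fit, inversion at the probability level above) can be condensed into a short routine. This is a minimal sketch of one plausible implementation under those stated choices, not the authors' code; the array layout and the use of scipy's genpareto are assumptions.

```python
import numpy as np
from scipy.stats import genpareto

def pot_return_level(times, swh, n_years, return_period=100.0,
                     pct=99.5, decluster_days=3.0):
    """PoT estimate of the `return_period`-year SWH for one 2-deg bin.

    times   : observation times in days since an arbitrary epoch
    swh     : along-track significant wave heights (m)
    n_years : record length in years (25 for 1992-2016)
    """
    u = np.percentile(swh, pct)          # 99.5th-percentile threshold

    # Decluster: merge exceedances closer than three days into one
    # storm and keep only its maximum, so peaks are ~independent.
    order = np.argsort(times)
    peaks, last_t = [], -np.inf
    for t, x in zip(times[order], swh[order]):
        if x <= u:
            continue
        if not peaks or t - last_t > decluster_days:
            peaks.append(x)              # start a new storm cluster
        else:
            peaks[-1] = max(peaks[-1], x)  # higher peak, same cluster
        last_t = t
    peaks = np.asarray(peaks)

    # Fit the GPD to the excesses over u (location fixed at zero).
    shape, _, scale = genpareto.fit(peaks - u, floc=0.0)

    # Probability level P = 1 - N_Y / (T * N_PoT), then invert the GPD.
    lam = peaks.size / n_years           # exceedances per year
    p = 1.0 - 1.0 / (return_period * lam)
    return u + genpareto.ppf(p, shape, loc=0.0, scale=scale)
```

Run over every 2° × 2° bin, a routine of this kind produces return-level maps of the sort discussed below, though the exact values depend on the fitting and declustering details.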
Validation of Satellite SWH Data The altimeter SWH data were validated by comparison with the SWH measurements from the IORS and the Donghae buoy prior to the estimation of the extreme SWH value. Figure 6a,b present a comparison between the altimeter SWH data and the in situ measurements at the IORS and the Donghae buoy. When the collocation procedure between the altimeter along-track SWH data and the in situ measurements was performed using criteria of 30 min and 50 km in time and space, the numbers of matchup data produced at the IORS and the Donghae buoy were 833 and 556, respectively. The SWH data from the altimeters were in good agreement with the SWH measurements from both the IORS and the Donghae buoy. The comparison resulted in RMSEs of 0.38 m at the IORS and 0.23 m at the Donghae buoy. Compared with the SWH measurements from the IORS, the altimeter SWH data tended to be slightly overestimated, with a positive bias of 0.23 m. This tendency was found to be due to a mixture of errors caused by the characteristics of the IORS platform, which observes the wave height using a microwave instrument at a height of 35 m, in addition to satellite errors [52]. In contrast, the altimeter SWH data showed an insignificant bias of 0.01 m in comparison with the measurements from the Donghae buoy. The IORS and the Donghae buoy measure SWH data at a high temporal resolution of approximately 1 h at a point location, while the altimeter data are sparsely distributed with varying time offsets.
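The 30-min/50-km collocation and the bias and RMSE statistics quoted above can be reproduced schematically as follows. This is a sketch under assumed array layouts (times as numpy datetime64 values, positions in degrees), not the authors' processing chain.

```python
import numpy as np

R_EARTH_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat, dlon = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 2.0 * R_EARTH_KM * np.arcsin(np.sqrt(a))

def collocate(buoy_t, buoy_swh, alt_t, alt_lat, alt_lon, alt_swh,
              buoy_lat, buoy_lon, dt_max_min=30.0, dist_max_km=50.0):
    """Match altimeter along-track SWHs to buoy records within
    30 min and 50 km; returns paired (buoy, altimeter) values."""
    pairs = []
    near = haversine_km(alt_lat, alt_lon, buoy_lat, buoy_lon) <= dist_max_km
    for t, swh in zip(alt_t[near], alt_swh[near]):
        dt = np.abs((buoy_t - t) / np.timedelta64(1, "m"))
        i = np.argmin(dt)
        if dt[i] <= dt_max_min:
            pairs.append((buoy_swh[i], swh))
    return np.array(pairs)

def bias_rmse(pairs):
    """Error statistics used in the text: bias and RMSE."""
    err = pairs[:, 1] - pairs[:, 0]      # altimeter minus in situ
    return err.mean(), np.sqrt((err ** 2).mean())
```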
Therefore, the SWH data may differ from each other depending on these observation characteristics. Considering these differences, monthly averaged values and maximum values of the SWHs were calculated for both the in situ measurements and the satellite data. For the altimeter SWH data, the mean and maximum values were calculated using along-track data within a 2° × 2° bin centered on the locations of the IORS and the Donghae buoy. Figure 6c–f presents the comparisons of the monthly means of wave heights from the in situ observation stations and the satellite observations. Overall, both the average and maximum SWHs observed from the satellites and the point stations were in good agreement over the entire period. Two large peaks of more than 10 m in the monthly maximum (Figure 6d) were generated by typhoons Muifa and Bolaven, which passed over the IORS in August 2011 and August 2012, respectively. Similar values were measured by both the IORS and the altimeters. However, the maximum SWH obtained from the IORS showed a peak in August 2010 when Kompasu passed, while the maximum SWH from the altimeters did not show this feature. When comparing the monthly maximum SWHs observed from the Donghae buoy and the altimeters, some peaks measured at the Donghae buoy did not appear in the altimeter observations (Figure 6f). This suggests that the altimeters failed to observe the extreme waves generated by storms that occurred suddenly in the East Sea (Sea of Japan), which is related to the fast movement of typhoons at relatively high speed and to fast-developing low-pressure systems [47,48].
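The monthly mean and maximum statistics on the 2° × 2° grid can be computed with a simple groupby, as sketched below. The column names ('time', 'lat', 'lon', 'swh') are assumptions about the along-track table, not the format of the IFREMER files.

```python
import numpy as np
import pandas as pd

def monthly_bin_stats(df: pd.DataFrame, bin_deg: float = 2.0) -> pd.DataFrame:
    """Monthly mean and maximum SWH on a regular 2-degree grid.

    df : DataFrame with columns 'time' (datetime), 'lat', 'lon' and
         'swh' holding along-track altimeter data (names assumed).
    """
    out = df.copy()
    # Snap each observation to the lower-left corner of its bin.
    out["lat_bin"] = np.floor(out["lat"] / bin_deg) * bin_deg
    out["lon_bin"] = np.floor(out["lon"] / bin_deg) * bin_deg
    out["month"] = out["time"].dt.to_period("M")
    g = out.groupby(["lat_bin", "lon_bin", "month"])["swh"]
    return g.agg(["mean", "max", "count"]).reset_index()
```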
Estimation of Extreme SWH Using the PoT Method Figure 7 shows the results of the extreme SWH estimates obtained by applying the PoT method with the GPD to the satellite-observed SWH data from the seas surrounding the in situ measurement stations, the IORS and the Donghae buoy. The small plot on the upper right side of each figure presents a comparison between the quantiles of the observed data and the quantiles of the PoT-estimated values of the altimeter-observed SWH data from the seas surrounding the IORS and the Donghae buoy. Since relatively high SWH data were selected by excluding the SWHs smaller than the threshold value, as shown by the red curves in Figure 7, the GPD fit was expected to represent the characteristics of the distribution of extreme wave heights. The quantiles estimated using the determined probability density function (PDF) were in good agreement with the observed quantiles from the altimeter data near the IORS and the Donghae buoy. The estimated 25-year and 50-year return period SWHs for the satellite SWH data near the IORS were 11.83 m and 13.95 m, respectively (Table 2). The 100-year return period SWH was estimated to be 16.49 m for the altimeter data near the IORS. The estimated extreme SWHs were higher than the maximum SWHs, with differences of 3.66 m and 0.20 m around the IORS and the Donghae buoy, respectively. Considering that the estimated 100-year return period SWHs were higher than the maximum values, it appears reasonable to estimate extreme SWH by applying the PoT method in the study area. Figure 8 shows the estimated SWHs, marked as red lines, as a function of return periods from 1 year to 100 years using satellite data near the IORS and the Donghae buoy. The dashed lines represent the upper and lower limits of the estimated SWHs within the 95% confidence interval. The confidence interval was calculated by using the variance-covariance matrix and the delta method, as described in [17]. In the sea near the IORS, the satellite-based extreme SWH at the 100-year return period was relatively large, 16.49 m, with a confidence interval of approximately 5.04 m. In the region near the Donghae buoy, on the eastern coast of the Korean Peninsula with deep water, the 100-year return period SWH was 7.43 m, with upper and lower limits of the estimated return level of 7.94 m and 6.93 m.
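The intervals above come from the variance-covariance matrix of the GPD fit via the delta method [17]. A resampling bootstrap is a simple alternative that typically gives comparable intervals; the sketch below assumes the same threshold-excess setup as the fitting routine earlier and is not the method used in the paper.

```python
import numpy as np
from scipy.stats import genpareto

def bootstrap_ci(excesses, u, lam, T=100.0, n_boot=1000, alpha=0.05,
                 seed=None):
    """Percentile-bootstrap CI for the T-year return level.

    excesses : peak SWHs minus the threshold u
    lam      : mean number of exceedances per year
    """
    rng = np.random.default_rng(seed)
    p = 1.0 - 1.0 / (T * lam)            # probability level, as above
    levels = []
    for _ in range(n_boot):
        sample = rng.choice(excesses, size=excesses.size, replace=True)
        c, _, scale = genpareto.fit(sample, floc=0.0)
        levels.append(u + genpareto.ppf(p, c, loc=0.0, scale=scale))
    lo, hi = np.quantile(levels, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```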
Spatial Distribution of Extreme Significant Wave Heights from Altimeter Data To understand the spatial distribution of extreme SWHs in the Northwest Pacific, the 100-year return period SWHs within bins of 2° × 2° were estimated using the PoT method (Figure 9).

Seasonal Variability of Mean and Maximum Significant Wave Heights To understand the differences between the 100-year extreme SWHs and the mean SWHs over the past decades, we investigated the monthly distributions of SWHs for the period 1992 to 2016, shown in Figure 10. In January, the SWHs were still high but began to decrease to less than 3 m in the northeastern region. In spring, from March to May, the SWHs were markedly reduced, to less than 1 m in the marginal seas. This tendency of relatively small SWHs (<1 m) in the marginal seas lasted through the summer, from June to August. Therefore, the spatial distribution of the 100-year return period SWHs from the PoT analysis in Figure 9 can be asserted to reflect the characteristics of extreme winter conditions in the northeastern part of the study area. Considering the relatively small SWHs in the East China Sea in summer, the extreme SWHs in this region originate from the activity of typhoons rather than from ordinary summer SWHs. As mentioned previously, the mean SWHs were relatively small, less than 1.5 m, in the East China Sea in summer. This yielded large differences from the 100-year return period SWHs in this region. Accordingly, the seasonal variations of extreme SWHs within the upper 0.1% were examined, as shown in Figure 11. Similar to the seasonal mean of the SWHs, the extreme SWHs also showed the highest values, of up to 14 m, in the eastern part in winter, from December to February (Figure 11d). One of the most remarkable differences between the means of all SWHs (Figure 10) and the extreme SWHs within the upper 0.1% (Figure 11) was found in summer and fall.
There were high values of extreme SWHs amounting to 8–10 m in the East China Sea and south of Japan in the Northwest Pacific. This implies that some high SWHs occur only seldom but appear with extremely high wave heights in summer and fall, suggesting a potential role for tropical storms in these regions during those seasons. Figure 11. Seasonal distribution of significant wave heights from satellites within the upper 0.1% in the Northwest Pacific in (a) spring (March, April, and May), (b) summer (June, July, and August), (c) fall (September, October, and November), and (d) winter (December, January, and February) for the period of 1992 to 2016.

Seasonal Variation of 100-Year Return Significant Wave Heights A relatively weak seasonal variation of the extreme 100-year return SWH estimated using the PoT analysis was found in the Northwest Pacific (Figure 12). The spatial average of the 100-year return period SWH from the PoT analysis was estimated to be about 10 m in winter. In contrast, it was calculated to be about 8.3 m in summer, which was approximately 1 m higher than the maximum values of the upper 0.1% of SWHs (Figure 11). The overall results of the PoT analysis showed a spatial pattern similar to that of the upper 0.1% of SWHs, but due to the limited data and high spatial variability, some abnormal values can be seen in the study area. In winter and spring, extreme SWHs tended to be high at relatively high latitudes (>40°N). As described earlier, the region of highest extreme SWHs (>10 m) in the East China Sea during summer (Figure 12b) seemed to be due to the effect of typhoons (Figure 1c,d). The effects of extreme conditions and latitudinal tendencies were also detected in the fall (Figure 12c).

Effect of Tropical Cyclones on the Estimation of Extreme SWH In the previous sections, it was hypothesized that the high PoT-based SWHs in summer were associated with typhoons. To clarify whether these are indeed related to typhoons, we classified all the spatial grids into two regions, typhoon and non-typhoon, by applying a limit to the number of typhoons (N = 10) in each bin. Figure 13a shows the histogram of the differences between the PoT-derived extreme SWHs and the maximum SWHs in the typhoon region, with a typhoon frequency greater than 10 corresponding to a cumulative percentage of typhoon passage frequency of approximately 60%, as shown in Figure 1c,d. The differences between the 100-year return SWHs obtained from the PoT analysis and the maximum SWHs were positive in nearly all regions (>99.5%) regardless of the number of typhoons, which indicates that the characteristics of extreme conditions were appropriately reflected in the EVA (Figure 13a,d). The maximum count was found at a difference of approximately 4 m, similar to that of the non-typhoon regions, as shown in Figure 13d.
Although the maximum frequency appeared at a similar difference of the SWHs in both the typhoon and non-typhoon regions, the months with a high frequency of high SWHs within the upper 0.1% differed: they appeared in August to October in the typhoon regions (Figure 13b) and in December to February in the non-typhoon regions (Figure 13e). In total, 56.6% of the typhoon regions presented the maximum SWH between August and October, whereas 67.5% of the non-typhoon regions showed the maximum value in winter. The PoT-derived SWHs showed a high frequency in fall (from September to November) in the typhoon region, while the maximum frequency appeared in winter (December to February) in the non-typhoon region, as shown in Figure 13c. The distributions of SWHs showed considerable differences between the typhoon region and the non-typhoon region. In order to investigate the characteristics of the SWH distribution in the typhoon and non-typhoon regions in more detail, two points representing the typhoon (128°E, 30°N) and non-typhoon (174°E, 52°N) regions were selected, and the PDFs were determined by PoT analysis (Figure 14). At the selected point representing the typhoon region (128°E, 30°N), about 30 typhoons passed during the study period. As expected, the quantiles estimated by the PoT analysis showed relatively good agreement with the observed quantiles (Figure 14a). In addition, as shown in Figure 14b, the PoT analysis estimated appropriate extreme SWHs at the point representing the non-typhoon region. With the accumulation of satellite-observed SWH data over several decades, it can be suggested that the PoT analysis can be used to estimate ever more reliable extreme SWHs in the Northwest Pacific.
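The typhoon/non-typhoon split (more than N = 10 typhoons per 2° × 2° bin) can be derived directly from the best-track fixes. The sketch below assumes an IBTrACS-like table with columns 'storm_id', 'lat' and 'lon'; the column names are illustrative, not the archive's actual field names.

```python
import numpy as np
import pandas as pd

def classify_typhoon_bins(tracks: pd.DataFrame, bin_deg: float = 2.0,
                          n_min: int = 10) -> pd.DataFrame:
    """Label each 2-degree bin as a typhoon or non-typhoon region.

    tracks : DataFrame of best-track fixes with columns 'storm_id',
             'lat', 'lon' (names assumed, IBTrACS-like).
    A bin is a 'typhoon region' if more than n_min distinct storms
    passed through it during the study period.
    """
    t = tracks.copy()
    t["lat_bin"] = np.floor(t["lat"] / bin_deg) * bin_deg
    t["lon_bin"] = np.floor(t["lon"] / bin_deg) * bin_deg
    counts = (t.groupby(["lat_bin", "lon_bin"])["storm_id"]
                .nunique().rename("n_typhoons").reset_index())
    counts["typhoon_region"] = counts["n_typhoons"] > n_min
    return counts
```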
Discussion Although the extreme SWHs from the PoT analysis are evaluated reliably in the Northwest Pacific, where typhoons are frequently generated, the method may also have some limitations. Figure 15 shows the 10.8 µm channel images of COMS/MI from 26 to 27 August 2012, when typhoon Bolaven transected the Korean Peninsula. In Figure 15, all the altimeter along-track SWH observations shown were within 3 h of the COMS/MI observation time. Typhoon Bolaven passed near the IORS at 15 UTC on 27 August 2012, and an enormous SWH with a peak value of 11.1 m was measured by the instrument at the IORS (Figure 3). At nearly the same time, an altimeter observed an SWH of about 12 m while passing through the center of the typhoon (Figure 15g). As may be observed from the time series of the altimeter observations in Figure 15, the altimeters did not obtain data for the entirety of the storm, although all satellite altimeter data from Jason-1/2 and Cryosat-2 were collected. Therefore, it is highly plausible that extreme SWHs may be underestimated in regions in which the altimeters are unable to observe the high SWHs generated by storms. The PoT analysis is reported to be sensitive to the observational sampling limits of the altimeters [28]. In order to calculate extreme wave heights with high confidence, the duration of the wave height record should be sufficiently long. Especially in the Northwest Pacific, where typhoons occur irregularly in space and time, satellite data should be accumulated over a longer period in order to include the effects of typhoons.
Both buoys and altimeters have limitations in measuring extreme waves, because buoys measure instantaneous extreme waves at a single point while satellite altimeters measure a mean value over a footprint; a difference between the resulting measurements will therefore inevitably appear. Despite these differences, field observations of wave heights are very important and provide invaluable verification data. In this regard, observational data from the IORS in the center of the East China Sea can provide very important clues regarding wave height changes, as well as other oceanic and atmospheric variables, when typhoons or extreme events occur. In light of this, if sufficient data are accumulated, the IORS can be one of the most important sites for the validation of extreme SWHs derived from satellite data and for diverse kinds of oceanic research. Thus, more buoy stations for in situ measurements should be installed and operated in near-real time for the high-performance estimation of 100-year extreme SWHs. Figure 15. The SWH data were obtained from the satellite altimeters of (b) Jason-1 (right) and Jason-2 (left), (d) Jason-1 (right) and Jason-2 (left), (e) Cryosat-2, (g) Cryosat-2 (middle), Jason-1 (right) and Jason-2 (left) in the upper right corner, and (h) Jason-1 (right) and Jason-2 (left).

Conclusions The 100-year return period SWHs were estimated by applying the PoT method as one representative EVA method. The PoT-derived SWHs were compared with the maximum SWHs within the upper 0.1% of satellite observations in the Northwest Pacific. Despite many shortcomings, including the limitations of the PoT method and the unevenness of satellite altimeter sampling, the PoT method supported our hypothesis by yielding higher SWHs than the maximum values observed from 1992 to 2016. The comparisons were performed by classifying the extreme SWHs into two different regions, a typhoon region and a non-typhoon region, defined by a threshold on the number of typhoons in each bin.
As a result, the PoT-derived 100-year extreme SWHs revealed characteristic variations driven not only by typhoons in the East China Sea during the typhoon season in summer and fall but also by winter-time high SWHs in the northeastern part of the study region in the Northwest Pacific. The overall differences between the PoT-derived extreme SWHs and the maximum values peaked at approximately 4 m, and the highest differences were approximately 8 m. The present PoT method represents the SWHs of extreme events well. However, there is a potential limitation in terms of bias due to sampling problems when an altimeter observes the SWH along the track in the nadir direction. Nonetheless, satellite altimeter data are very valuable for estimating extreme SWH values, as they cover the global ocean as well as regional seas. In addition, if sufficient data are accumulated, the IORS is also expected to be a good candidate site for investigating oceanic responses to the typhoons that frequently pass over the station.

Data Availability Statement: All data used in this study are available from IFREMER (satellite altimeter SWH data, ftp://ftp.ifremer.fr/ifremer/cersat/products/swath/altimeters/waves/ (accessed on 20 January 2021)), KMA (buoy and COMS data, https://nmsc.kma.go.kr/ (accessed on 20 January 2021)), or KHOA (IORS data, http://www.khoa.go.kr/ (accessed on 20 January 2021)). Acknowledgments: In situ data from the Ieodo Ocean Research Station (IORS) were provided by the IORS project of the Korean Hydrographic and Oceanographic Agency, Korea. Conflicts of Interest: The authors declare no conflict of interest.
CORPORATE GOVERNANCE AND PERFORMANCE OF NIGERIAN LISTED FIRMS: FURTHER EVIDENCE This work, in an agency framework, adds to the small literature on Nigeria by examining the impact of corporate governance on firm financial performance. Using a sample of 64 listed non-financial firms for the period 2002 to 2006, the study is able to capture the impact of the new Code of Corporate Governance released in 2003 on previous findings. Introductory investigations of the Nigerian capital market's operations and regulations depict a low, but improving, state. Empirically, panel regression estimates show that board size, audit committee independence and ownership concentration aid performance. Higher proportions of independent directors and of directors' shareholdings unexpectedly dampen performance, while firms vesting the roles of CEO and chair in the same individual perform better.

Introduction and Problem Statement The concept of corporate governance looks at the best approach to solving the problems of adverse selection and moral hazard attendant on principal-agent relationships. According to Senbet and John (1998), "corporate governance involves how all stakeholders in the firm attempt to ensure that managers and other insiders adopt mechanisms that safeguard the interests of the stakeholders". In recent times, the term stakeholder has been accorded a broader perspective; it goes beyond its traditional treatment as shareholders to include employees, creditors, government and others, for instance, environmentalists. Notionally, corporate governance practices are expected to: (a) focus board attention on optimizing the company's operating performance and returns to shareholders; (b) ensure that directors are made accountable to shareholders and management accountable to directors; (c) ensure that both corporate directors and management have a long-term strategic vision that, at its core, emphasizes sustained shareholder value; (d) encourage shareholders, despite differing investment strategies and tactics, to press corporate management to resist short-term behaviour by supporting and rewarding long-term superior returns; and (e) make information about companies readily transparent to permit accurate market comparisons (CalPERS, 2007). Organisations such as the World Bank, the Organisation for Economic Co-operation and Development (OECD), banks, funds, the stock exchanges of countries, the Commonwealth and several others are giving critical interest to the issue of corporate governance. This is evident in several releases of updated codes of corporate governance and in conferences, especially following the scandals witnessed at Adelphia, Enron and WorldCom. Generally, well-governed firms are expected to have higher profits, less bankruptcy risk, higher valuations and to pay out more cash to their shareholders, while the reverse holds for poorly-governed firms (Kyereboah-Coleman and Biekpe, 2006). Several studies have established the importance of good corporate governance to enhanced firm performance (Sanda et al., 2005; Adelegan, 2006; 2007). This present work contributes to the literature by utilising more recent data (2002–2006) than those employed by previous empirical studies in Nigeria.
For example, Adenikinju (2005), Sanda et al. (2005) and Magbagbeola (2006) use the periods 1993–2002, 1996–1999 and 1999–2004, respectively. More importantly, this study covers the era of the new Code of Corporate Governance released in 2003, so the impact of the Code can readily be captured. Adenikinju (2005) only succeeded in describing the provisions of the Code; due to her sample period (1993–2002), she was unable to empirically determine the effect of the Code on firm performance, which is part of the issues addressed in this paper. The broad objective of this study is to establish the impact of corporate governance measures on the financial performance of Nigerian listed firms. Specifically, the study gives an overview of the structure of and developments in the Nigerian capital market; discusses the state of corporate governance in Nigeria; examines industrial and temporal patterns of governance and performance indicators; establishes the impact of corporate governance measures on the performance of Nigerian listed firms; and, finally, articulates some policy issues. The rest of this paper is organised as follows: section two contains the background of the study, while section three presents the literature review. Section four sets out the theoretical framework and methodology, section five presents the empirical results and analysis, and, finally, section six concludes the paper.

Background of Study The structural characteristics of the capital market and the historical development of corporate governance in Nigeria are presented in this section.

The Nigerian Capital Market Participants in the Nigerian capital market include the Nigerian Stock Exchange (NSE), discount houses, development banks, investment banks, building societies, stockbroking firms, insurance and pension organizations, quoted companies, the government, individuals and the Nigerian Securities and Exchange Commission (SEC). The Nigerian Stock Exchange (NSE) provides the essential facilities for companies and government to raise money for business expansion and development projects (through investors who own shares in the companies) for the ultimate economic benefit of society. Like all stock exchanges, the NSE is made up of many markets, including a market for new issues (the primary market), a market for existing securities (the secondary market) and markets for debt securities and equities. The NSE, earlier called the Lagos Stock Exchange (LSE), was registered on 1 March 1959, incorporated on 15 September 1960 and started business on 5 June 1961. In December 1977, its name was changed from the Lagos Stock Exchange to the Nigerian Stock Exchange (NSE), and additional branches have since been opened in Kaduna, Port Harcourt, Kano, Ibadan, Onitsha and Abuja. The Second-Tier Securities Market (SSM) was established on 30 April 1985 to assist small and medium-sized companies that are unable to meet the requirements of the first-tier market (NSE) in raising long-term capital. To encourage the development of the SSM, the stringent conditions for listing on the first-tier market were relaxed for indigenous enterprises seeking to raise funds through the SSM.
The major recent developments in the NSE include the following: the transition from the call-over trading system to the Automated Trading System (ATS) on 27 April 1999; the commissioning of the electronic business (e-business) platform in July 2003; and, lastly, the trade alert information system launched in 2005, which provides text messages to stockholders' mobile phones about any transaction in their stock within 24 hours. These developments are aimed at reducing the information asymmetry and transaction costs associated with stock trading, enhancing transparency, and curbing unethical practices in the Nigerian capital market (Adelegan, 2007a).

Features of the Nigerian Capital Market We discuss the major features of the Nigerian capital market under the following indices: market size, market concentration, efficiency and liquidity.

Market Size Measures of market size considered are: the number of listed securities and their growth rates, the size of market capitalization and its growth rates, and the market capitalization ratio (i.e., the ratio of the value of shares listed to GDP). Source: Author's computations; underlying data are obtained from the NSE Factbook (various issues). The trends in market capitalisation and the capitalisation ratio can be observed from figures 2.2 and 2.3. On average, 276 firms are listed, with a period-average growth rate of 3.24%. The average market capitalisation is N2,450.98b, with a 61.79% average growth rate. Finally, the average capitalisation ratio for the period is 22.71%.

Efficiency of the Nigerian Securities Market In an efficient market, prices fully and correctly reflect all available and relevant information, and security prices adjust instantaneously to new information. Market efficiency operates at three levels, viz: weak-form efficiency, semi-strong-form efficiency and strong-form efficiency (Anyanwu et al., 1997 and Adelegan, 2004). There are few studies testing the efficiency of the Nigerian capital market, and most of these are tests of weak-form efficiency. Most studies have found the Nigerian capital market to be weakly efficient, while the fewer studies examining efficiency at the semi-strong form found mixed evidence (Adelegan, 2004). At the semi-strong form, Emenuga (1989), Oludoyi (1999), Adelegan (2001) and Adelegan (2007b) find that the Nigerian capital market is not efficient. However, tests of strong-form efficiency are yet to be performed on Nigerian data. Recently, Adelegan (2004) validated the weak-form Efficient Market Hypothesis (EMH) using serial correlation tests. However, the accompanying runs test invalidated this finding, leading the author to conclude that we can neither accept nor reject the weak-form EMH for the Nigerian Stock Exchange. Further, Adelegan (2007b) shows that board changes have information content that is reflected in share price behaviour, thereby confirming the semi-strong inefficiency of the Nigerian Stock Exchange.

2.2.3. The Liquidity of the NSE The liquidity of a stock market can be defined as the ease with which shares are traded in the market. It can be measured by two main indices: the ratio of the value of securities traded to GDP (the total value traded-GDP ratio) and the turnover ratio (i.e., the value of shares traded as a percentage of market capitalization). These are shown in Table 2. Source: Author's computations; underlying data are obtained from the NSE Factbook (various issues).
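The size and liquidity indices just defined are simple ratios, and the figures reported in Table 2 follow directly from them. A sketch of their computation from an annual table is given below; the column names ('market_cap', 'value_traded', 'gdp') are assumed for illustration and must all be in the same currency unit.

```python
import pandas as pd

def market_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Compute NSE size and liquidity indices from annual data.

    df columns (names assumed): 'market_cap', 'value_traded', 'gdp',
    all in the same currency unit (e.g., naira billions).
    """
    out = df.copy()
    out["cap_ratio"] = 100 * out["market_cap"] / out["gdp"]       # size
    out["traded_gdp"] = 100 * out["value_traded"] / out["gdp"]    # liquidity
    out["turnover_ratio"] = 100 * out["value_traded"] / out["market_cap"]
    return out
```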
Table 2 shows an upward trend in the turnover of the NSE, rising from a low of N60.3 million in 2002 to a high of N470.3 million in 2006, with a period mean of N228.02 million. The value traded-GDP ratio, expressed as a percentage, displayed a rising pattern, moving from a low of 0.94% in 2002 to 2.60% in 2006, with an average value of 2.13%. Equally, the turnover ratio exhibited an upward trend during the study period, rising from a low of 7.89% in 2002 to a high of 9.19% in 2006, with an average value of 9.14%. These increasing indices provide evidence that the growth of trading activities in the NSE leads the growth of the stock market (capitalisation), implying increasing liquidity of the NSE. Nevertheless, as shown by the total value traded/GDP ratio, the NSE still displays low, though increasing, trading activity.

2.3. An Appraisal of Corporate Governance in Nigeria

Some efforts have been made at espousing corporate governance in Nigeria, and each new one is directed at solving newly emerged problems of governance or existing ones inadequately addressed by preceding regulations. The Companies and Allied Matters Decree (CAMD) of 1990, as the basic company law, lays most emphasis on provisions that engender financial transparency, which was seen as the most pressing need at that period.

Further, consequent on the scandals observed in some large corporations like Enron, Adelphia and WorldCom, greater attention has been accorded governance issues across countries to obviate reoccurrence. Nigeria, realizing the need to align with international best practices, identified board composition and operations as the major weakness in its corporate governance practice. Hence the release in 2003 of the Code of Corporate Governance in Nigeria by the SEC and CAC, and of the Code of Corporate Governance for Banks in Nigeria Post Consolidation in 2006 by the CBN. Although previous corporate laws in Nigeria attempted to protect the often-violated shareholders' rights, the SEC release on the Conduct of Shareholders Associations in Nigeria (2007), more than ever before, is designed to ensure that association members uphold high ethical standards and make positive contributions in ensuring that the affairs of public companies are run in an ethical and transparent manner and in compliance with the code of corporate governance for public companies.

A survey of Nigeria by the Securities and Exchange Commission (SEC), reported in an April 2003 publication, showed that corporate governance was at a rudimentary stage, as only about 40% of quoted companies had recognised codes of corporate governance in place. This is aggravated by the fact that most businesses in the formal sector are not publicly listed. DPC (1999), in a survey of enterprises in six randomly selected states, found that only 13.3% of the enterprises are listed on the Nigerian Stock Exchange, while 48.5% are limited liability companies. Thus, close to 38% of companies operating in the formal sector operate outside the provisions of the company law, and nearly 87% of formal-sector businesses may be operating outside the legislation governing the capital market (Oyejide and Soyibo, 2001).
To evaluate the standard of corporate governance in Nigeria, Oyejide and Soyibo (2001) surveyed regulatory agencies in Nigeria using the OECD scoring guide. They find that, largely, the institutions and the legal framework for effective corporate governance appear to be in existence; however, compliance and/or enforcement appear weak or non-existent, which is in consonance with the position of Wilson (2006). Adelegan (2007a), in her work on corporate governance in Nigeria, opines: "Corporate Governance in Nigeria can be viewed as satisfactory based on some measures, volume and turnover ratios are reasonable, the underlying regulations and the powers of the regulatory bodies are modelled on those of UK and the US Securities and Exchange Commission. Disclosure and accounting rules are strict and moderately enforced." She however notes that the market for corporate control is very weak in Nigeria.

The Nigerian capital market is underdeveloped and emerging, characterised by thinness of trading, low market capitalisation, low turnover and illiquidity (Adelegan, 2004). Notwithstanding this, the Nigerian Securities and Exchange Commission (SEC), along with other agencies like the Corporate Affairs Commission (CAC) and the Central Bank of Nigeria (CBN), is still meeting the task of enacting relevant policies that can foster good corporate governance.

Empirical Review

This section reviews past works that have tried to empirically validate the relationship between measures of corporate governance and firm performance. Several mechanisms of corporate governance have been identified in the literature as influencing firm performance. Some of these mechanisms, along with their direction of impact on firm performance, are given below; these are also summarised in Table 3.

Shareholder rights and firm performance have been found to be related. Shareholder rights reflect the balance of power between shareholders and management. According to Ashbaugh-Skaife and Collins (2005), "A key element of this dimension is whether the firm maintains a level playing field for corporate control and whether it is open to changes in management and ownership that provide increased shareholder value". Gompers, Ishii and Metrick (2003) compute a corporate governance index from 24 governance factors grouped into five categories and thereby establish a positive association between stronger shareholder rights and higher firm value. Barber, Kang and Long (2005), in a cross-sectional study of a large sample of widely held U.S. firms, find that firms with significant restrictions against shareholder participation have a greater propensity to commit accounting misstatements. Firms with weaker shareholder rights have also been found to exhibit significant operating underperformance (Core et al, 2005), higher expected returns (Chen et al, 2004) and higher credit ratings (Ashbaugh-Skaife and Collins, 2005). Chidambaran et al (2007), however, find no significant relationship between shareholder rights and firm performance.
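The governance-index construction attributed above to Gompers, Ishii and Metrick (2003), one point per restrictive provision, reduces to simple indicator arithmetic. The sketch below is illustrative only: the provision names and the data are hypothetical placeholders, not drawn from their dataset.

```python
import pandas as pd

# Hypothetical firm-level flags: 1 if the firm has the provision, 0 otherwise.
# GIM (2003) use 24 such provisions; only a few illustrative ones are shown.
provisions = pd.DataFrame(
    {
        "poison_pill":        [1, 0, 1],
        "staggered_board":    [1, 0, 0],
        "golden_parachute":   [0, 1, 1],
        "supermajority_vote": [1, 0, 0],
    },
    index=["firm_A", "firm_B", "firm_C"],
)

# The G-index is the row sum: one point for each provision that restricts
# shareholder rights (a higher G means weaker shareholder rights).
provisions["G_index"] = provisions.sum(axis=1)
print(provisions["G_index"])  # firm_A: 3, firm_B: 1, firm_C: 2
```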
Debt, corporate governance and performance have been linked together. For instance, debt owed to large creditors is expected to improve firm performance, since the creditors tend to see to it that the firm is well managed (Sanda et al, 2005). Sakai and Asaoka (2003), using panel data on over 400 Japanese firms, find that a higher debt-asset ratio improves firm performance. This is consistent with Sanda et al (2005) in the case of Nigeria. Agrawal and Knoeber (1996) have, however, shown that the effect of leverage on firm performance can be technique-dependent: they find higher debt financing to be negatively related to firm performance in a single-mechanism OLS regression, but this effect disappears in simultaneous-equation estimation.

Institutional shareholding is expected to influence the standard of corporate governance positively and thereby optimize stakeholder value (SEC-CAC, 2003; Gillan, 2001). Holmstrom and Kaplan (2005) note the doubling of large institutional investors' share of ownership of U.S. corporations, and according to them, "the large increase in the shareholding of institutional investors means that professional investors, who have strong incentives to generate greater stock returns and are presumably more sophisticated, own an increasingly large fraction of U.S. corporations". This view is also confirmed in Chidambaran et al (2007), where a direct relationship is established between institutional shareholding and performance. However, Edwards and Hubbard (2005) find that despite the very substantial growth of institutional ownership of U.S. corporations in the past 20 years, there is little evidence that institutions acquire the kind of concentrated ownership positions required to play a dominant role in the corporate governance process. In Nigeria, institutional investors account for 17.4% of shareholding (Adelegan, 2007a).

A link between blockholding/ownership concentration and firm performance has also been established. Blockholding refers to the proportion of a firm's shares owned by a given number of the largest shareholders. A satisfactory measure of ownership structure as a means of indicating control structure must reflect the distribution of both shareholding and shareholders (Teriba et al, 1977), and a high concentration of shares tends to create more pressure on managers to behave in ways that are value-maximising (Sanda et al, 2005). A competing view in the literature suggests that concentrated ownership allows undue influence over management to secure benefits that are detrimental to minority stakeholders (Shleifer and Vishny, 1997; Teriba et al, 1977). Sakai and Asaoka (2003) document that an increase in the ratio of blockholders' shareholding improves firm performance in Japan for the period 1979-2001; Sanda et al (2005) establish the same in the Nigerian case. Other studies, like Moustafa (2006) and Cremers and Nair (2003), make similar arguments. On the other hand, Ashbaugh-Skaife and Collins (2005) find firms' credit ratings to be negatively associated with the number of blockholders that own at least 5% of the firm, while Demsetz and Lehn (1985) find no relationship between ownership concentration and accounting profit rates. Ownership concentration is high in Nigeria (Adenikinju and Ayonrinde, 2001): the largest shareholders own an average of 32.65% of equity, and an average of 13.42% of equity is owned by directors (Sanda et al, 2005).
The proportion of outside directors sitting on the board of a firm (board independence) has been proposed to aid firm value. This is based on the arguments that independence is the cornerstone of accountability (CalPERS, 2007) and that directors who are independent of the management strive at maximizing firm performance (MacAvoy and Millstein, 2005). Scholars like Gillan (2001a) have argued contrarily; their point is that high-powered executives may possess more information with which they influence the independent directors so as to create a systematic bias toward management. In Ashbaugh-Skaife and Collins (2005), board independence is positively related to firm credit ratings; Chidambaran et al (2007) also establish a direct relationship between the number of outsiders on the board and firm performance; and Lee et al (2005) find that board independence strengthens the positive association between firm performance and pay dispersion. Magbagbeola (2005) confirms a positive and significant relationship between non-executive directors and Nigerian banks' return on assets. Conversely, Sanda et al (2005) and Adenikinju (2005) establish an insignificant relationship between firm performance and board independence in Nigeria, while in Agrawal and Knoeber (1996) more outsiders on the board is negatively related to performance. Adelegan (2007a) shows that shareholders are adequately represented on the boards of Nigerian listed firms, since 79% of board members are outsiders.

Combining the roles of firm chairman and CEO in one person (executive duality) is identified as an undue concentration of power which is likely to adversely affect proper decision-making and firm performance (SEC-CAC, 2003; CBN, 2006). Sanda et al (2005), employing pooled Ordinary Least Squares regression analysis on panel data for the period 1996 through 1999 for a sample of 93 firms listed on the Nigerian Stock Exchange, find that separating the posts of CEO and chair works in favour of the firm. Ashbaugh-Skaife and Collins (2005) assert that a reduction in CEO power co-varies with firm credit ratings; these results are also confirmed in Moustafa (2006). For Nigeria, Adelegan (2007a) establishes that 92% of the boards of directors of quoted firms have a chairman different from the chief executive officer.

Board size and firm performance have been correlated. For instance, it has been found that the smaller the board size, the more efficient it is expected to be (Adelegan, 2007a). Some studies have been able to confirm the above thesis (Kyereboah-Coleman and Biekpe, 2004; Sanda et al, 2005; Moustafa, 2006), while others (Magbagbeola, 2005; Chidambaran et al, 2007) refute it. Adelegan (2007a) has found the average board size of Nigerian listed firms to be nine; this is still within the range recommended by SEC-CAC (2003) and close to Sanda et al (2005), who recommend a 10-member board for Nigerian listed firms.

There is a relationship between directors' shareholding/compensation and firm performance. A well-designed compensation programme should serve to align the interests of executives and employees with those of shareholders (Gillan, 2001). Subjecting this to empirical validation, Brown and Caylor (2004) find that executive and director compensation is highly associated with good performance, and Ashbaugh-Skaife and Collins (2005) find directors' shareholding to aid firm credit ratings. Lee et al (2005) establish a positive relationship between executive pay dispersion and firm performance, while Fich and Shivdasani (2004), using a fixed-effects model that accounts for self-selectivity bias, find that firms with outside-director stock option plans have significantly higher market-to-book ratios and profitability metrics. However, directors' shareholding is found to be negatively related to performance in Sanda et al (2005).

Another relationship observed in the literature is that between the frequency of board meetings and firm performance. Frequent board meetings with sufficient notice are crucial in maintaining effective control over the company and monitoring the executive and management (SEC-CAC, 2003). Chidambaran et al (2007), however, find firm performance to be independent of the number of board meetings.

The last five rows of Table 3 below summarise the few empirical studies in this area for the developing economy of Nigeria. A cursory look depicts conflicting evidence. In this present study, therefore, we try to offer recent evidence for the Nigerian case.

Table 3. Preceding Researches on Corporate Governance (source: author's investigation and compilation). Among the entries: a study using ROA as the performance measure and a governance index (G), the sum of one point for the existence of each of 24 unique provisions that restrict shareholders' rights, applying correlation and OLS, and finding that firms with weak shareholder rights exhibit significant operating underperformance; and, in row 14, Johnson et al (2005), covering 1500 large US firms for 1990-1998, with Tobin's Q and long-term abnormal returns as performance measures and the same governance index (G), applying OLS and finding no significant long-term abnormal returns based on governance for the 1990s, although good governance is valued by investors.

Theoretical Framework

Corporate governance encompasses several issues and dimensions of firms, which makes a number of theories and their variants applicable. The neoclassical Theory of the Firm, as the traditional theory, is erected on the assumption of the firm as an operating unit set out to maximise profit subject to the constraints imposed by costs. The theory postulates that once firms continue to substitute cheaper inputs for expensive ones until the ratios of their marginal productivities to their prices are equalised and the bordered Hessian determinant is greater than zero, a firm automatically satisfies its objective function of profit maximisation, which, according to this theory, is the sole objective firms seek to optimise. In the Stakeholders Theory, authors like Freeman (1984), Donaldson and Preston (1995), Frooman (1999), Hill and Jones (1992), and Phillips (2004) have proposed that the interests of other constituencies are equally important and that managers should therefore make decisions that take account of the interests of all the stakeholders in a firm.
In his effort to show that stakeholder theory is never a legitimate contender to value maximisation, Jensen (2000, 2001) propounded the Enlightened Stakeholder Theory, which argues that value maximisation provides corporate managers with a single objective, whereas stakeholder theory directs corporate managers to serve 'many masters'. Moreover, without the clarity of mission provided by a single-valued objective function, companies embracing stakeholder theory will experience managerial confusion, conflict, inefficiency and perhaps even competitive failure.

The Agency Theory, also known as the principal-agent problem, deals with the conflict that ensues as a result of the arrangement called the firm. It refers to the variety of ways in which agents, linked by contractual arrangements with a firm, influence its behaviour. These may include organizational and capital structure, remuneration policies, accounting techniques and attitudes toward risk-taking. Agency costs are deemed the total cost of administering and enforcing these arrangements (Jensen and Meckling, 1976).

Agency theory explains how best to organize relationships in which one party (the principal) determines the work which another party (the agent) undertakes (Eisenhardt, 1985). The theory argues that under conditions of incomplete information and uncertainty, which characterize most business settings, the two agency problems of adverse selection and moral hazard arise. The Multi-Task Principal-Agent Model of Holmstrom and Milgrom (1991) builds on the traditional agency theory: it utilizes a linear principal-agent model which shows that an increase in an agent's compensation in any one task will cause some reallocation of attention away from other tasks.

Another principal-agent problem arises in the form of free cash flow: cash flow in excess of that required to fund all projects that have positive net present values when discounted at the relevant cost of capital. The problem is how to motivate managers to disgorge the cash rather than invest it at below the cost of capital or waste it on organisational inefficiencies (Jensen, 1986). This version rests on the assumption that managers have incentives to cause their firms to grow beyond the optimal size, since this raises their power and compensation. It therefore tries to identify firm activities that are likely to reduce the agency costs associated with free cash flow.

Aghion and Bolton (1992), in their seminal paper, extended agency theory to the area of capital structure based on transaction costs and contractual incompleteness (incomplete contracts). The main concerns of the theory are, first, whether and how the initial contract can be structured in such a way as to bring about a perfect coincidence of objectives between the entrepreneur (manager) and the investor; and second, when the initial contract cannot achieve this coincidence of objectives, how the control rights can be allocated.
Theoretically, this work is premised on the Agency Theory as discussed above. The choice is based on the fact that this theory, more than any other, highlights and attempts to solve the major conflicts that ensue as a result of the arrangement called the firm. Further, its treatment of debt and equity financing makes it most suitable for studying quoted companies' governance and performance structures. The focal input of this theory is the formal proof that the smaller the fractional ownership of a manager in a corporation, the more he tends to appropriate larger amounts of the corporation's resources in the form of perquisites, and the more desirable it is for the minority shareholders to expend more resources in monitoring his behaviour (see Jensen and Meckling, 1976). Hence, corporate governance advocates factors like high directors' shareholding and stock options as aids to the first point above, while optimum board size, blockholding, institutional shareholding, leveraging, independent directors and audit members, and the separation of the positions of chairman and CEO are factors that make effective monitoring possible.

4.2. Methodology of the Study

The baseline model is

y_it = α_i + β x_it + u_it,  (1)

where y_it is the dependent variable and x_it and β are the (non-constant) regressors and parameters, for i = 1, 2, ..., 18 cross-sectional units (industries); each cross-section is observed for periods t = 1, 2, ..., 5. Equation (1), which is explicitly specified in equations (2) and (3), is therefore estimated for each of the measures of performance by fixed-effect and random-effect regression techniques. Note that equations (2) and (3) are only specified to capture the industrial fixed effect; the time fixed effect is not considered here for two reasons. First, corporate governance indicators have been shown to be time-invariant (GIM, 2003; Core et al, 2005; Johnson et al, 2005). Panels A and B of Table 4 below depict the variables used in this study along with their definitions and measurements.

Table 4. Variables, Definitions and Measurements. Panel A: dependent variables. TQ (Tobin's Q): market value of common equity plus book value of liabilities, divided by the book value of total assets. ROA (return on assets): net profit as a percentage of total assets. P-E (price-earnings ratio): ratio of share price to earnings per share.

Preceding the explicit specifications, three important factors from the literature are considered:

i. Cross-sectional effects: there arises the need to take heterogeneity explicitly into account by allowing for industry-specific variables, since the degree of influence of corporate governance may vary across industries (Gujarati, 2003; Adenikinju, 2005; Fich and Shivdasani, 2004).

ii. Control variables: usually in studies of this nature, the variable firm size is controlled for (Sanda et al, 2005). (See the sub-section on method of analysis below on why the error term u changes to ω under the random-effect specification.)

The above specifications are estimated for each of our four measures of performance; thus, in all, we have eight different estimations.

Method of Analysis/Estimation

The descriptive analyses, in terms of trends and structures of corporate governance and performance of the sample, are first presented for the study period. Since an industry-specific effect is expected, equations (2) and (3) are estimated in panel data models (the fixed- and random-effects models), while the better of the two is decided upon by the Hausman specification test; heteroscedasticity-consistent estimators are also provided. The fixed-effect estimator allows α_i in (1) to differ across industrial units by estimating a different constant for each industry. This is done by subtracting the "within" mean from each variable and estimating OLS using the transformed data. The random-effect estimator, on the other hand, assumes that the term α_i is the sum of a common constant α and a time-invariant random variable u_i.
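For concreteness, the within transformation just described can be sketched in a few lines. This is a minimal illustration, not the authors' code: the file name, the column names, and the use of pandas and statsmodels are assumptions for the example.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical long-format panel: one row per (industry, year) observation.
df = pd.read_csv("governance_panel.csv")
cols = ["tq", "board_size", "board_size_sq", "leverage", "firm_size"]

# Within (fixed-effect) transformation: subtract each industry's mean
# from every variable, then estimate OLS on the demeaned data.
demeaned = df[cols] - df.groupby("industry")[cols].transform("mean")

# Heteroscedasticity-consistent standard errors, as in the text.
fe = sm.OLS(demeaned["tq"], demeaned[cols[1:]]).fit(cov_type="HC1")
print(fe.summary())
```

A random-effects fit of the same specification (for instance with a panel-econometrics package such as linearmodels) would supply the second coefficient vector needed for the Hausman comparison of the two estimators described above.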
4.2.3 Study Scope and Data Sources

This study utilises data from 64 firms listed on the First-Tier Securities Market of the Nigerian Stock Exchange, since this set of firms is under an obligation to publish some essential information in their Annual Reports and Accounts. Excluded are the financial firms, based on their different debt structure, which is not comparable to that of other sectors (Adenikinju and Ayonrinde, 2001; Kyereboah-Coleman and Biekpe, 2006). Another reason for their exclusion is the critical restructuring the sector is currently undergoing. The study covers the period 2002 to 2006 (five financial years), which encompasses the years before and after the release in 2003 of the Code of Corporate Governance in Nigeria by the Securities and Exchange Commission and the Corporate Affairs Commission, thus allowing us to compare the corporate governance and firm performance of Nigerian listed firms for the two periods.

Empirical Analysis

This section presents descriptive statistics on indicators of both corporate performance and governance. Also examined are the correlations between corporate performance and governance indicators. Thereafter, the analysis of the impact of corporate governance on performance is carried out.

Looking within the industries, the petroleum and breweries sectors have the highest mean (median) Tobin's Q of 3.12 (2.84) and 2.95 (2.42) respectively, while commercial services and machinery are observed to have the least values of 0.60 (0.51) and 0.63 (0.65) respectively (note that their small sample sizes may have implications for these values). In terms of return on assets, the top performers are breweries, N12.67 (N12.75), and food, beverage and tobacco, N10.27 (N8.03). However, during the study period, industries like computer and construction recorded negative (least) returns on assets, amounting to a mean (median) of -N8.94 (-N0.79) and -N4.42 (N1.26) respectively.

Measures of Corporate Governance

Table 6 depicts Nigerian listed firms as having a 9-member board on average. The mean (median) proportion of outsiders on the board is 38% (37.5%), directors' shareholding is 9.79% (1.29%), and ownership concentration has a mean (median) value of 55.19% (50.00%). Also depicted is that firms are levered to the tune of 4.83% (1.63%) of their share capital. The majority (97.3%) of CEOs are not the chairs of their firms, while the average size of firms is N14.3b, with a median of N3.24b. By industry, the mean (median) board size is lowest in the computer sector (6 members) and highest in breweries (12 members). Machinery, textiles, petroleum and breweries have the highest percentages of independent directors, at 77.78% (77.78%), 58.78% (60.00%), 56.91% (54.70%) and 52.27% (56.09%) respectively, while commercial services and real estate score the least, at 6.44% (10%) and 9.52% (14.29%) respectively. Directors' shareholding is highest in commercial services, 38.55% (50.93%), and least in the real estate sector, 0.34% (0.32%). Ownership concentration, on the other hand, peaks in real estate, 98% (98%), and is lowest in the publishing industry, 18.43% (11.22%).

The percentages of independent audit members among the industries cluster around 50%, with textiles having the highest, 62.38% (66.67%), and automobile and tyre the least, 45.21% (50%). The most levered industry is breweries, 14.99% (13.25%), and publishing is the least levered, 0.65% (0.54%). In terms of size, the breweries sector has the largest mean (median) value of N86.2b (N54.9b) and machinery the least, N0.598b (N0.544b). Note: median values in parentheses. Source: author's computations; underlying data are obtained from Companies' Annual Reports and the NSE Factbook (various issues).
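The industry-level mean (median) figures above follow from a simple grouped aggregation. The sketch below is illustrative only, with hypothetical file and column names rather than the study's actual dataset:

```python
import pandas as pd

df = pd.read_csv("governance_firms.csv")   # hypothetical firm-level data

# Mean and median of selected governance indicators by industry,
# mirroring the "mean (median)" presentation used in the text.
stats = (
    df.groupby("industry")[["board_size", "outsiders_pct", "ownership_conc"]]
      .agg(["mean", "median"])
      .round(2)
)
print(stats)
```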
From the foregoing, two important points are worth noting. First, most firms that do well on governance issues can also be associated with high performance measures; in this class are the breweries and petroleum sectors, suggesting a sectoral fixed effect of governance on performance. Second, however, the trend patterns of most governance indicators are haphazard and inconsistent across sectors, as there is an absence of synchronization of governance issues. Thus, the question arises whether Nigerian listed firms strive to accomplish the provisions of any code in the immediate years following the release of such a code.

Correlation Analysis

A preliminary analysis of the relationship between governance and performance indicators was conducted using the results of the Pearson Product Moment Correlation (PPMC) presented in Table 7 below.

Table 7. Correlation between Measures of Governance and Performance. Pearson r for adjusted values in parentheses; *, ** and *** indicate significance at the 0.1, 0.05 and 0.01 levels respectively. Source: author's computations; underlying data are obtained from Companies' Annual Reports and the NSE Factbook (various issues).

Board size is noted to be positively related to all performance indicators; however, a significant association is found only with the price-earnings ratio. A priori, an increase in board size at low levels is expected to be positively related to performance, while at large board sizes a rise in board size is expected to be inversely associated with performance: increasing board size has the tendency to diversify the board for better performance. However, the negative and significant relationship of board size with adjusted TQ (by adjusted, we mean the firm's performance minus the industry median performance value) points to the fact that adjusting for some firm differentials may change the direction of the relationship. The regression analyses in later sections provide a clearer picture. The percentage of outsiders on the board co-varies significantly and positively with TQ and P-E, but negatively with ROE and ROA, though insignificantly with the latter. Outsiders on the board are expected to support unprejudiced decisions that are value-raising. Directors' shareholding has an inverse relationship with the performance measures; this relationship is significant with TQ and P-E. This may be expected at low levels of directors' shareholding; at higher levels, however, a direct relationship is expected. Therefore, a non-linear association is expected between directors' shareholding and performance; this also holds for board size and ownership concentration.

Concentration is observed to be positively associated with performance, except in the case of ROE; the relationship with TQ is significant. A higher concentration of ownership is expected to aid performance, as large holders pay close attention to the management of their high stakes. The percentage of independent audit members correlates positively with P-E and ROE; only the relationship with P-E is significant at the 5% level. The relationship is negative (though insignificant) with the other performance indicators, as well as for almost all the adjusted values. Leverage correlates significantly and positively with the performance indicators except ROE; the relationship with adjusted TQ is significant at 1%. A higher gearing ratio is theoretically expected to enhance performance, as creditors attempt to see to the shrewd utilization of advanced credits. Lastly, firm size is positively related to TQ, ROA and P-E, but negatively related to ROE and the adjusted values of TQ, ROA and ROE. Of significance, however, is that larger firms are more productive and have higher price-earnings ratios.
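The raw and industry-adjusted correlations just described can be reproduced mechanically. A minimal sketch follows, assuming hypothetical file and column names; the adjusted value is the firm's performance minus its industry median, as defined in the text:

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("governance_firms.csv")   # hypothetical firm-level data

# Industry-adjusted performance: firm value minus its industry median.
df["tq_adj"] = df["tq"] - df.groupby("industry")["tq"].transform("median")

r, p = pearsonr(df["board_size"], df["tq"])          # raw Pearson r
r_adj, p_adj = pearsonr(df["board_size"], df["tq_adj"])  # adjusted
print(f"raw r = {r:.2f} (p = {p:.3f}); adjusted r = {r_adj:.2f} (p = {p_adj:.3f})")
```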
Regression Analysis

To verify the impact of the industrial levels of corporate governance on firms' performance, Table 8 below presents the estimation results of equations (2) and (3).

Table 8. Regression Results of the Effect of Corporate Governance on Firm Performance.

The F-ratios of the eight different estimations shown in Table 8 indicate that each is a significant prediction. However, the TQ and P-E models are noted to fit better than the other two, judging from the adjusted R-squared. In addition, the Hausman specification test indicates the superiority of fixed-effect modelling for the ROE models, whereas for the TQ, ROA and P-E models the fixed- and random-effects specifications are statistically indifferent. In terms of significant impacts, directors' shareholding is observed to exert a non-linear effect in the TQ model, as the negative impact of directors' shareholding at low levels is reversed when it becomes substantial, suggesting more efficient monitoring roles of directors at high stakes of holding. Highly levered firms exhibit lower TQ. This is unexpected, as credit-giving institutions are supposed to aid performance through the supervision of projects and credit management; it is likely that the credit institutions in Nigeria fail in their effective monitoring, or that such monitoring levels are too low to be reflected in firm value. Although combining the roles of firm chairman and CEO in one person is discouraged by the SEC and CBN codes on the basis that it is likely to adversely affect proper decision-making, our findings differ, as CEOs who are also the chairs of their firms report higher TQ. A probable explanation is that such CEOs effectively monitor the firm's activities, especially when they are significant shareholders.

Using ROA as a measure of performance, the effect of board size meets our a priori expectation: an initial increase in the number of persons on the boards of Nigerian companies raises ROA; however, beyond a certain point, increases in board size adversely impact ROA. This is in consonance with Ncube's (2006) observation that the larger the board, the more diversified its capacity for effective monitoring, although at some high level a large board may distort the flow of quality communication, as also established by Sanda et al (2005) for the Nigerian case. Further, the negative impact of outsiders on the board may support Gillan's (2001a) view that high-powered executives may influence part-time directors into creating a systematic bias towards the management. Increasing ownership concentration initially raises ROA but later reduces it. A likely explanation is that initially, at higher concentrations of shares, pressure is mounted on managers in ways that are value-maximizing (Sanda et al, 2005); however, at some high level of ownership concentration, undue influence may be exerted over management to secure benefits that are detrimental to firm value (Shleifer and Vishny, 1997; Teriba et al, 1977). CEOs doubling as chairs aid performance, as already established under the TQ models.

In the P-E models, the effect of independent directors found under ROA is still established. In addition, independent audit membership significantly aids P-E, as the independent members of the audit committee have the tendency to aid value-maximising monitoring. The effect of these variables on P-E is also observed to be size-dependent.
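The non-linear effects just reported come from quadratic specifications of the form performance = b1*X + b2*X^2 + ..., whose turning point is X* = -b1/(2*b2). The coefficients below are hypothetical placeholders for illustration, not the paper's estimates:

```python
# Turning point of a quadratic effect b1*X + b2*X**2: X* = -b1 / (2*b2).
b1, b2 = 1.8, -0.10   # hypothetical ROA coefficients on board size and its square

turning_point = -b1 / (2 * b2)
print(f"ROA rises with board size up to about {turning_point:.0f} members, then falls")
# -> 9 members with these illustrative values
```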
For ROE, board size exerts a non-linear effect on performance, as already discussed under ROA; independent directors exert a negative impact; and finally, debt is observed to boost ROE. This may be because large creditors usually see to it that their funds are appropriately channelled; moreover, firms, knowing that they may still approach the creditors in the future, strive at prudence. This is consistent with the findings of Sakai and Asaoka (2003) and Sanda et al (2005).

5.5 Comparison of Findings of This Current Study and Previous Related Studies on Nigeria

One way to see the value added of this study (in terms of its methodological approach and scope) is to compare its findings with those of previous related studies. Table 9 below summarises the findings of this present work vis-à-vis related studies in Nigeria for easy comparison. An important point to note is that our results are most comparable with those of Sanda et al (2005), as Adenikinju (2005) does not consider the non-linearity effects of some governance mechanisms, which is a likely explanation for some differences in the results of these two previous studies. The major area of divergence between this study and Sanda et al (2005) is the effect of CEO status on performance: they found the separation of the roles of CEO and chair to be value-enhancing, whereas in the current study our finding is the contrary. A likely explanation is that in the immediate years after the release of the Code, firms whose CEOs also double as the chairs of their boards employ that status in effective monitoring, thereby leading to enhanced performance. Thus, it seems that the scope of the study, which covers part of the period following the release of the new Code, is responsible for the difference observed between this current study and the previous one.

Table 9. Summary of findings of this current study vis-à-vis related studies (source: author's investigation and computation). The governance measures covered include board size, ownership concentration, board independence, directors' shareholding, CEO-chairman duality, independence of audit membership, and debt. Generally, the size of the boards of Nigerian firms and ownership concentration impact on performance in the expected non-linear mode; higher proportions of independent directors and directors' shareholding unexpectedly dampen performance; independent audit membership aids only the price-earnings ratio; firms vesting both the roles of CEO and chair in the same individual perform better; and leverage boosts return on equity but dampens firms' Tobin's Q.

6. Summary of Findings, Conclusion and Policy Lessons

Firm governance has been both theoretically and empirically shown to aid performance. This work therefore joins others in verifying this, using recent data on Nigeria encompassing the era of the newly released code of corporate governance in the country. Our findings show that corporate governance issues are still rudimentary in Nigeria. However, despite a weakly-efficient capital market and regulatory bodies, several commendable efforts have been made at rejuvenating corporate governance.
Our empirical findings, on the other hand, show that elements of corporate governance, as used in works of this nature and as stated in the Code of Corporate Governance (2003) for Nigeria, somewhat impact on firm performance, though some in unexpected directions. Results also differ according to the measure of performance employed. Nonetheless, generally, the size of the boards of Nigerian firms and ownership concentration impact on their performance in the expected non-linear mode; higher proportions of independent directors and directors' shareholding unexpectedly dampen performance; independent audit membership aids only the price-earnings ratio; firms vesting both the roles of CEO and chair in the same individual perform better; and leverage is noted to boost return on equity but dampen firms' Tobin's Q. Further, our results do not depend on the number of years a firm has been listed; however, firm size in some cases affects the nature of the relationship between governance and performance.

Having established the relevance of governance variables to firm performance, we recommend the following.

The optimization of board size and composition is desirable for performance, especially in a setting like Nigeria with a weak takeover market. Board size and composition should be determined such that decision management and decision control are separated, unless decision makers have a significant ownership stake in corporate cash flows. The board of a company should be big enough to display a good spread of monitoring skills and enhance its effectiveness, yet small enough to allow quality communication within the board. In composition, independent board membership should be encouraged, as this enables directors to act without relying solely on initiatives from management. Further, there should be periodic meetings of the independent directors without management, and formal rules or guidelines establishing an independent relationship between the board and management should be enacted.

Appropriate incentive schemes tied to performance should be put in place to increase firm value through value-adding efforts. We suggest that boards can require CEOs to become substantial owners of company stock, and that salaries, bonuses and stock options can be designed to provide big rewards for superior performance and big penalties for poor performance. The threat of dismissal for poor performance can be made real, but this should be done carefully, lest the public lose confidence in the company. On the part of the managers, efforts should be concentrated on developing and executing a solid long-term business strategy, rather than slavishly focusing on accounting earnings.

However, in designing such an incentive scheme, as pointed out in the literature, it should not be tied to near-term earnings growth, since this encourages excessive risk-taking as well as business decisions geared towards propping up earnings. Any system in which managers participate in annual profits but not losses can encourage excessive risk-taking.

Board members should equally be incentivized; however, such incentives should not make seemingly independent directors support risky investments that are likely to push up share prices, as this may be counterproductive. Lastly, shareholders should have a say in stock-option plans that have the potential to dilute their voting power and wealth.
The mechanism of debt should be exploited by firms desirous of expansion, as it aids the monitoring process. However, since debt also has its own costs, firms need to determine their optimal debt-equity ratio in order to maximize returns from such activities. Firms should strive to incorporate governance measures that are value-enhancing. Noting, however, the diverse availability and directions of impact of these measures, it is pertinent to harmonise them: for instance, the benefits derivable from a good governance measure like an increase in directors' shareholding can easily be lost to an indiscriminate expansion of board size.

The regulatory authorities enact, and see to compliance with, the rules and regulations governing corporations. No doubt, relevant rules have been enacted; however, this may not guarantee adoption. Thus, regulatory bodies should ensure that the current organizational architecture of Nigerian listed companies engenders proper governance. We notice from our regression results that a sizeable number of estimations depict a negative influence of board and audit membership independence on performance. This is unexpected, and we therefore urge the authorities to ensure that the boards of Nigerian firms are not expanded for political or other reasons.

In line with the findings of CPZ (2007), our findings show that the relationship between governance, observable and unobservable firm characteristics, and corporate performance is intricate and may not be amenable to a sort on any single governance measure or firm characteristic; therefore, the same policy prescription on corporate governance for all firms is likely to be sub-optimal. Finally, if not in regulation, perhaps in suasion, Nigerian firms should be made to disclose more governance issues in their annual reports for adequate evaluation by current and prospective investors and researchers.
al, 2005; Adenikinju and Ayonrinde, 2001; Adelegan, 2007; Magbagbeola, 2006; Brown and Caylor, 2005; Core and Rusticus, 2005, etc.). Conversely, several others have established the impotency of some corporate governance precepts (Demsetz and Lehn, 1985; Core et al, 2005; Adenikinju, 2005; Chidambaran et al, 2007), hence yielding conflicting observations. This notwithstanding, works on corporate governance are still few in Nigeria. They are limited to the works of Teriba et al (1977); Oyejide and Soyibo (2001); Adenikinju and Ayonrinde (2001); Sanda et al (2005); Adenikinju (2005); Magbagbeola (2005); and
2018-12-07T14:16:11.045Z
2008-01-01T00:00:00.000
{ "year": 2008, "sha1": "25e0f033d20f44b850f373c47b4d05c7922daa93", "oa_license": "CCBYNC", "oa_url": "https://virtusinterpress.org/spip.php?action=telecharger&arg=5457&hash=10caab6bc55df92d1b4f697f455bf906ee00aed4", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "25e0f033d20f44b850f373c47b4d05c7922daa93", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
6216649
pes2o/s2orc
v3-fos-license
Ill-Formed and Non-Standard Language Problems

Prospects look good for making real improvements in Natural Language Processing systems with regard to dealing with unconventional inputs in a practical way. Research which is expected to have an influence on this progress, as well as some predictions about accomplishments in both the short and long term, are discussed.

1. Introduction

Developing Natural Language Understanding systems which permit language in expected forms, in anticipated environments, with a well-defined semantics is in many ways a solved problem with today's technology. Unfortunately, few interesting situations in which Natural Language is useful live up to this description. Even a modicum of machine intelligence is not possible, we believe, without continuing the pursuit of more sophisticated models which deal with such problems and which degrade gracefully (see Hayes and Reddy, 1979).

Language as spoken (or typed) breaks the "rules". Every study substantiates this fact. Malhotra (1975) discovered this in his studies of live subjects while designing a system to support decision-making activities. An extensive investigation by Thompson (1980) provides further evidence that providing a grammar of "standard English" does not go far enough in meeting the prospective needs of the user. Studies by Fromkin and her co-workers (1980) likewise provide new insights into the range of errors that can occur in the use of language in various situations. Studies of this sort are essential in identifying the nature of such non-standard usages.

But more than merely anticipating user inputs is required. Grammaticality is a continuum phenomenon with many dimensions; so is intelligibility. In hearing language used in a strange way, we often pass off the variation as dialectal, or we might unconsciously correct an errorful utterance. Occasionally, we might not understand or even misunderstand. What are the rules (meta-rules, etc.) under which we operate in doing this? Can introspection be trusted to provide the proper perspectives? The results of at least one investigator argue against the use of intuitions in discovering these rules (Spencer, 1973). Computational linguists must continue to conduct studies and to consider the results of studies conducted by others.

2. Perspectives

Several perspectives exist which may give insights on the problem. We present some of these, not to pretend to summarize them exhaustively, but hopefully to stimulate interest among researchers in pursuing one or more of these views of what is needed.

Certain telegraphic forms of language occur in situations where two or more speakers of different languages must communicate. A pidgin form of language develops which borrows features from each of the languages. Characteristically, it has a limited vocabulary, lacks several grammatical devices (like number and gender, for example), and exhibits a reduced number of redundant features. This phenomenon can similarly be observed in some styles of man-machine dialogue. Once the user achieves some success in conversing with the machine, whether the conversation is being conducted in Natural Language or not, there is a tendency to continue to use those forms and words which were previously handled correctly. The result is a type of pidginization between the machine dialect and the user dialect which exhibits pidgin-like characteristics: limited vocabulary, limited use of some grammatical devices, etc.
It is therefore reasonable to study these forms of language and to attempt to accommodate them in some natural way within our language models. Woods (1977) points out that the use of Natural Language "... does not preclude the introduction of abbreviations and telegraphic shorthands for complex or high frequency concepts -- the ability of natural English to accommodate such abbreviations is one of its strengths." (p. 18) Specialized sublanguages can often be identified which enhance the quality of the communication and prove to be quite convenient, especially to frequent users.

Conjunction is an extremely common and yet poorly understood phenomenon. The wide variety of ways in which sentence fragments may be joined argues against any approach which attempts to account for conjunction within the same set of rules used in processing other sentences. Also, the constituents being joined are often fragments rather than complete sentences, and therefore any serious attempt to address the problem of conjunction must necessarily investigate ellipsis as well. Since conjunction-handling involves ellipsis-handling, techniques which treat non-standard linguistic forms must explicate both.

3. Techniques

What approaches work well in such situations? Once a non-standard language form has been identified, the rules of the language processing component could simply be expanded to accommodate that new form. But that approach has limitations and misses the general phenomenon in most cases. DeJong (1979) demonstrated that wire service stories could be "skimmed" for prescribed concepts without much regard to grammaticality or acceptability issues: as long as coherency existed among the individual concepts, the overall content of the story could be summarized. The whole problem of addressing what to do with non-standard inputs was finessed because of the context.

Techniques based on meta-rules have been explored by various researchers. Kwasny (1980) investigated specialized techniques for dealing with co-occurrence violations, ellipsis, and conjunction within an ATN grammar. Sondheimer and Weischedel (1981) have generalized and refined this approach by making the meta-rules more explicit and by designing strategies which manipulate the rules of the grammar using meta-rules. Other systems have taken the approach that the user should play a major role in exercising choices about the interpretations proposed by the system. With such feedback to the user, no time-consuming actions are performed without his approval. This approach works well in database retrieval tasks.

4. Near and Long Term Prospects

In the short term, we must look to what we understand and know about the language phenomena and apply those techniques that appear promising. Non-standard language forms appear as errors in the expected processing paths. One of the functions of a style-checking program (for example, the EPISTLE system by Miller et al., 1981) is to detect and, in some cases, correct certain types of errors made by the author of a document. Since such programs are expected to become a necessary part of any author support system, a great deal of research can be expected to be directed at that problem. A great deal of research which deals with errors in language inputs also comes from attempts to process continuous speech (see, for example, Bates, 1976).
The techniques associated with non-left-to-right processing strategies should prove useful in narrowing the number of legal alternatives to be attempted when identifying and correcting some types of error. It is quite conceivable that an approach to this problem that parallels the work on speech understanding would be very fruitful. Note that this does not involve inventing new methods, but rather borrows from related studies. The primary impediment to this approach at the moment, as with some of the other approaches mentioned, is the time involved in considering viable alternatives. As these problems are reduced over the next few years, I feel that we should see Natural Language systems with greatly improved communication abilities.

In the long term, some form of language learning capability will be critical. Both rules and meta-rules will need to be modifiable. The system's behavior will need to improve and adapt to the user over time. User models of style and preferred forms, as well as of common mistakes, will be developed as a necessary part of such systems. As speed increases, more opportunity will be available for creative architectures such as were seen in the speech projects, but which still respond within a reasonable time frame. Finally, formal studies of user responses will need to be conducted in an ongoing fashion to assure that the systems we build conform to user needs.
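By way of illustration of the meta-rule idea discussed in Section 3, the following is a minimal, hypothetical sketch of relaxing a co-occurrence (agreement) constraint when strict parsing fails. It is not drawn from any of the systems cited; the toy lexicon, the grammar, and the relaxation policy are all assumptions made for the example.

```python
# Toy lexicon: each word carries a category and a number feature.
LEXICON = {
    "the":   ("DET", None),
    "dogs":  ("N", "plural"),
    "dog":   ("N", "singular"),
    "barks": ("V", "singular"),
    "bark":  ("V", "plural"),
}

def parse(tokens, enforce_agreement=True):
    """Accept DET N V; optionally require subject-verb number agreement."""
    try:
        cats, feats = zip(*(LEXICON[t] for t in tokens))
    except KeyError:
        return None                       # unknown word
    if list(cats) != ["DET", "N", "V"]:
        return None                       # wrong constituent pattern
    if enforce_agreement and feats[1] != feats[2]:
        return None                       # co-occurrence violation
    return {"subject_number": feats[1], "verb_number": feats[2]}

def robust_parse(tokens):
    """Meta-rule: if strict parsing fails, retry with agreement relaxed
    and record the deviation rather than rejecting the input outright."""
    result = parse(tokens)
    if result is not None:
        return result, []
    relaxed = parse(tokens, enforce_agreement=False)
    if relaxed is not None:
        return relaxed, ["relaxed: subject-verb agreement"]
    return None, ["unparsable"]

print(robust_parse(["the", "dogs", "barks"]))
# -> ({'subject_number': 'plural', 'verb_number': 'singular'},
#     ['relaxed: subject-verb agreement'])
```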
2014-07-01T00:00:00.000Z
1982-06-16T00:00:00.000
{ "year": 1982, "sha1": "7f5fa55a6b3910d0ea87e51410c1a20d72df5e43", "oa_license": null, "oa_url": "http://dl.acm.org/ft_gateway.cfm?id=981296&type=pdf", "oa_status": "BRONZE", "pdf_src": "ACL", "pdf_hash": "7f5fa55a6b3910d0ea87e51410c1a20d72df5e43", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
246609035
pes2o/s2orc
v3-fos-license
Understanding Compliant Behavior During a Pandemic: Contribution From the Perspective of Schema-Based Psychotherapy

Objective: The current study examined whether compliance with anti-pandemic measures during the COVID-19 pandemic relates to (a) the importance of the fulfillment of core psychological needs, namely relationship, self-esteem, efficacy, and pleasure; (b) coping behavior styles, namely surrender, self-soothing, diverting attention, and confrontation; and (c) worries or concerns beyond COVID-19 which may impair wellbeing.

Methods: This study used a cross-sectional design and online survey data from 740 participants in Central Europe and West Africa, who responded to a structured questionnaire on psychological needs and coping behavior styles developed within the theoretical framework of schema-based psychotherapy.

Results: Analysis indicated that people with the psychological needs of "pleasure" and "efficacy" and the coping style of "surrender" were more likely to comply with anti-pandemic measures. We also found that people with the coping style of "confrontation" were less likely to comply. There were no statistically significant relationships between compliance and "relationship," "self-esteem," "self-soothing," "divert attention," and "existential concerns."

Discussion: Our findings indicate that how likely a given individual is to comply with prescribed pandemic countermeasures varies with their specific psychological needs and behavior styles. Therefore, to control contagion during a pandemic, authorities must recognize the relevance of human need fulfillment and behavior styles, and accordingly highlight and encourage admissible and feasible actions. The findings demonstrate that some individual differences in core psychological needs and coping behavior patterns predict compliance behavior.

INTRODUCTION

The COVID-19 pandemic has become an unprecedented global threat. The Emergency Committee of the World Health Organization (2020a) declared it a public health emergency of international concern on January 30, 2020, and a pandemic on March 11, 2020. The virus that causes COVID-19, scientifically named SARS-CoV-2, is highly contagious, and the risk of transmission depends particularly on individual and collective behavior. While the disease may be mild for most patients, the risk of hospitalization and mortality increases with age and some underlying conditions (Robert Koch Institut, 2021); the surge of infections and the congestion of intensive care units by severe cases have encumbered healthcare infrastructure worldwide. To combat the pandemic, the World Health Organization (2020b) and the disease control organizations of national governments have recommended a series of precautionary measures, including restrictions on travel, public gatherings, in-school teaching, and face-to-face interactions. Several of these measures were unusual before the pandemic and are fiercely debated both in public and in private, as they raise important questions about, among other controversies, the meaning of life when basic needs are not met; the intrusive nature of preventive policies, with their potential psychological and social consequences; and the risk of creating a new normal with restricted human rights and liberties. In this study, we sought to understand compliant behavior during the pandemic in the light of schema-based psychotherapy with regard to (a) the fulfillment of core psychological needs (CPNs) and (b) coping behavior styles (CBSs).
Both concepts are pillars of schema-based psychotherapy. Further, we sought to understand them in the context of people's concerns beyond the pandemic, as potential impediments to wellbeing, analogous to how these concepts are contextualized in schema-based therapy. As the pandemic may not be the only present threat to wellbeing, we considered it reasonable to investigate the effects on compliance of the diverse concerns people have beyond the pandemic.

This study was motivated by the controversies mentioned above and by the fact that pandemics may occur more frequently in the future (cf. United Nations, 2020). On the one hand, the anti-pandemic directives tend to facilitate survival, and humans by nature strive to survive; on the other hand, some people comply while others do not, and we are therefore also motivated to understand why. From a therapeutic point of view, compliance can also have a negative connotation when it constrains clients to grievously suppress their own needs, which can lead to clinical disorders, psychological distress, or impaired wellbeing. In addition, although there have been several studies on individual coping strategies during the pandemic, to the best of our knowledge there is no study aimed at understanding compliant behavior during a pandemic from the perspective of schema-based psychotherapy.

THEORETICAL FRAMEWORK

Schema-based psychotherapy (also known as schema therapy) is becoming increasingly popular among both psychotherapy researchers and practitioners. There are two independently developed influential traditions. One tradition, conceptualized as consistency-schema theory, is given by Grawe (2000, 2004); the other, conceptualized as the schema-mode model, is given by Young et al. (2007).

Core Psychological Needs

According to the consistency theory (Grawe, 2000; cf. Fries and Grawe, 2006; cf. Grosse-Holtforth et al., 2008), the striving for consistency of psychic processes is a superordinate principle of psychic functioning. Among the core teachings of the consistency theory model are that humans strive for equilibrium in the gratification of the basic psychological needs, and that incongruence (a significant form of inconsistency) is a major cause of the development and maintenance of psychopathological symptoms and poor wellbeing. In this theory, Grawe developed the concept of motivational schemata, where he differentiates between "avoidance motivational goals" (defined as mental representations of undesired transactions with the environment) and "approach motivational goals" (representations of desired transactions). The function of approach motivational goals is to ensure that basic needs are satisfied, while avoidance motivational goals serve to protect the individual from the repetition of aversive experiences. However, if the avoidance schema dominates an individual's life, what originally had the function of protecting the individual's needs (e.g., from being separated from others or being criticized) can paradoxically hinder the satisfaction of these same needs. In general, the schemas, which have neurological imprinting, are viewed by Grawe as organized units of psychological regulation for the purpose of reducing complexity through classification into patterns, according to which they govern behavior. In particular, a person's plan structure includes all the conscious and unconscious strategies developed throughout life to instrumentally fulfill one's needs.
Thus, in vertical analysis (generally in behavioral therapy) or plan analysis (particularly in the consistency-schema theory), gratification of basic needs is at the topmost level; accordingly, these needs are the ultimate driving factors of human behavior (cf. Caspar et al., 2005; Caspar, 2009, 2018). The doctrine of basic psychological needs teaches that certain requirements must be fulfilled to sustain a psychologically healthy life beyond mere physical existence (Becker, 1995). In his consistency theory, Grawe (2004) proposed the importance of balance in the fulfillment of CPNs, which he regards as the highest desired value ("Sollwert") of psychological activity. He describes these basic psychological needs as the need for attachment, increasing self-esteem, orientation and control, and gaining pleasure and avoiding displeasure. He views them as pervasive, in that they permeate all mental events. The innate desired value of the need described in psychology as bonding, connection, or connectedness is stated as the basic need for "relationship." Grawe's (2004) need for "self-esteem" is often misunderstood as the need for permanently elevating one's self-worth ("Selbstwerterhöhung"). However, it refers to the innate need for an elevated self-value (Offurum, 2019) and self-determination. The need for orientation and control should be broadly understood as the innate desired value of the need to have "freedom of action," self-efficacy, and locus of control or actionability, and includes the need for performance and achievement (Offurum, 2019, 2021). This need is referred to as "efficacy," "handling," or "actionability" in the current study. The basic need for "pleasure" includes the need for enjoyment, pleasurable experiences, play, fun, relaxation, ease, and esthetics. Grawe emphasizes that the underlying concepts, not the names, are decisive; thus, divergence in terminology has existed and may still exist in psychology. To our knowledge, there is no questionnaire in schema-based therapy to examine basic psychological needs during a pandemic. Our items for this study were thus formulated based on preliminary interviews in this field (Offurum, 2021).

Coping Behavior Styles Young's schema approach (Young et al., 2007; cf. Lobbestael et al., 2007; cf. Roediger, 2011) is based, among other concepts, on the concept of the early maladaptive schemas (EMS), maladaptive coping styles and responses, and the mode model. The early maladaptive schemas are emotionally anchored, unconscious, maladaptive, self-defeating core cognitive patterns that an individual develops during childhood and that are elaborated throughout one's lifetime. Young has presented 18 EMSs, such as abandonment, mistrust/abuse, enmeshment, grandiosity, hypercriticalness, and emotional deprivation (neglect). One could view these as a person's sore spots, which seem compatible with the avoidance schemas in Grawe's concept. In Young's theory, behavior is embedded in the three coping styles that form the second main feature of his model. Therein, individuals develop dysfunctional coping styles in order to cope with challenges when their schemas are triggered. These styles are maladaptive in his concept because, although they were initially strategies for coping with painful experiences in childhood, they are paradoxically later applied in inadequate situations, thereby contributing to reinforcement and perpetuation of the maladaptive schemas.
In psychology, the transactional model of coping with psychological stress is well established (Folkman et al., 1986). If coping patterns are applied across situations and maintained over a long time, situational (reaction) states can transcend time and situations to become personality traits. Therefore, coping can be seen as both a situational and a trans-situational response to challenges. It can be studied from different perspectives, such as personality disposition (habitual pattern or schema), situational ego-state (mode), or systemic (transactional dynamic; cf. Rexrode et al., 2008). In psychotherapy, current conceptualizations of coping correspond to the theory of ad hoc coping strategies (Horney, 1992) in psychoanalysis, according to which humans have three ad hoc strategies at their disposal to cope with the world. The first is the strategy of moving toward people. Here, the person exhibits consent or approval and considers others but neglects themself (Smith, 2007). In systemic psychotherapy, Satir (1988) calls this the placating communication style. The current study uses the term subjugation. Horney's second strategy, moving against others, emphasizes hostility and aggression. Here, life is considered a struggle and the individual exhibits the coping strategy of fighting, corresponding to Satir's (1988) blaming communication style. In the third strategy, moving away, the individual primarily considers themself while neglecting others: the individual flees, separating themself and potentially becoming neurotically detached from others, and preventing anyone or anything from touching or mattering to them (Smith, 2007). Horney does not view these ad hoc strategies as invariably maladaptive. Schema therapy, as influenced by Young et al. (2007), emphasizes the maladaptive nature of coping patterns and identifies three methods of adapting to one's schemas. The first strategy is known as "surrender" (in German, the term Erduldung, or "endurance," is preferred; see Roediger, 2011). The second style is termed "confrontation" or "overcompensation," and the third is called "avoidance." A bifocal approach is useful to examine these maladaptive coping behavior styles. The primary perspective defines them as behavioral strategies vis-à-vis the overwhelming feeling of psychological distress caused by maladaptive schemas. In the secondary perspective, however, they are also used to explain behavioral maneuvers vis-à-vis the counterparts that trigger the maladaptive schemas. Regardless of perspective, the styles are regarded as behavioral responses to the schemas, from which they differ. Further, while "surrender" ("subjugation" or "placating") and "confrontation" ("fight" or "counterattack") exhibit proximity, avoidance can be seen in a passive and an active manner: active in the sense of fleeing or diverting attention (rationalizer or distractor style), and passive in the sense of pacifying or self-soothing. The categorization may seem ambiguous, as some authors (e.g., Atkinson, 2012; Faßbinder et al., 2016) write that freezing belongs to the same category as surrender/subjugation, while others (Roediger and Zarbock, 2015) categorize it under avoidance. Existing questionnaires explore coping behaviors from various theoretical perspectives. The Ways of Coping Questionnaire is based on Lazarus's cognitive theory of psychological stress and coping (Folkman et al., 1986).
Similar to various adaptations (Sawang et al., 2010; Senol-Durak et al., 2011; Kolokotroni, 2014), the items used in our study are based on Folkman et al.'s (1986) concept, which have been incorporated within the framework of schema-oriented psychotherapy.

Worry or Concerns Both Grawe's and Young's traditions of schema-based psychotherapy dwell in the context of impairment of wellbeing, here conceptualized as worry or concerns in a non-clinical context. Concern or worry may be attributed to cognitive-emotive preoccupation with uncertainty, anxiety, and apprehension about the future. Excessive and uncontrollable worry constitutes the main diagnostic criterion for generalized anxiety disorder. Pathological worry is experienced as emotionally distressing and impairing. Although uncomfortable and potentially detrimental to health, worry can have the advantage of helping people avoid or solve problems (Borkovec et al., 1983). With novel threats, as in the present pandemic, and when individuals do not feel in control of the risk, they are more concerned (Carlucci et al., 2020). The current study focuses on the concept of worry or concern, regardless of pathological status. We investigated whether concerns/worry about issues beyond the pandemic affected compliance with pandemic-related restrictions by asking participants how worried they were about the following: (a) health issues, (b) crime and social insecurity, (c) setback at school or work, and (d) financial or economic problems.

Compliance Compliance is the dependent variable in this study. In medical practice, compliance or adherence describes the degree to which a patient follows medical advice. In this research, compliance refers to the application of therapeutic suggestions, both for treatment and prevention. Compliance with pandemic-related recommendations can generally be compared with adherence to medical guidelines. However, while non-adherence to personal medical advice may have no legal repercussions, defiance of pandemic regulations may carry severe legal consequences because the success of these measures rests on the individual's compliance. Although governmental responses to the pandemic have varied, most are comparable in their severity, duration, and types (Ritchie et al., 2021). To investigate compliant behavior, we employed the World Health Organization's (2020b) recommendations, representing the cardinal guidelines imposed worldwide: regularly and thoroughly washing one's hands with soap and disinfecting them to eliminate germs and viruses; wearing a mask or face shield in public; avoiding close contact, particularly shaking hands and hugging; cleaning and disinfecting surfaces frequently, especially those which are regularly touched, such as door handles, faucets, and phone screens; reducing use of public transport; avoiding crowds; and fostering one's immunity. At this stage of the pandemic, no vaccine was available, and because eradication of the virus is not possible, the main purpose of these prescriptions was to "flatten the curve" of its spread to prevent the congestion of intensive care units and avoid triage. There have been many studies on compliance during the present pandemic: some studies (Brouard et al., 2020; Carlucci et al., 2020; Raude et al., 2020) discuss the socio-demographics of individuals with regard to their compliance. Others, like Blais et al. (2021) and Dinić and Bodroža (2021), have studied personality differences that may relate to compliance.
Farias and Pilati (2021) studied political ideology, while some others (Plohl and Musil, 2021; Wright et al., 2021) studied trust in science, government, and/or medical professionals. Baloran (2020) and Wang et al. (2020) studied coping during this pandemic. Orgilés et al. (2021) and Donato et al. (2020) studied disturbance, worry, or concern. Eisenbeck et al. (2021) studied meaning-centered coping. While these studies are very informative, none examine behavior in the light of an influential psychological/psychotherapy concept like schema-based therapy; thus, the present study aims to fill this gap.

Study Design Having developed the items for our study with care and in line with the theoretical framework mentioned above, we performed "face-to-face item pretests." These were performed via video telephony due to movement and travel restrictions. The pretest respondents were similar to the intended participants, being males or females who were not experts in the field of research. We chose five individuals from each area targeted for dissemination and provided them with the pretest questions. While completing the questionnaire, they were repeatedly requested to be very critical and to comment on anything that crossed their mind (a simple think-aloud approach), especially where something seemed unclear or ambiguous. Notes of the testers' comments were taken, and further questions were asked (a verbal probing approach) to ensure, for example, that the questions were understood and answered in terms of the construct. The questionnaire was improved accordingly after each pretest, followed by further rounds of pretests. This method of pretesting, as provided by the applied software (SoSciSurvey, 2020), corresponds to the concept of cognitive interviews. In cognitive interviews, ensuring validity of the research instrument involves examining how respondents (a) understand the question, (b) retrieve relevant information, (c) judge their answer, and (d) assign their response in the questionnaire (Ryan et al., 2012). The goal was therefore to utilize the information from the various pretests to improve the quality of the questionnaire and, thus, the quality of responses. Cognitive interviews were primarily developed to test each question in a questionnaire but not to check the technical functionality of a questionnaire. For this reason, they were supplemented with further "online pretests" (pretests without the researcher's presence). The pretest hyperlink was thus distributed to testers who accessed the questionnaire without the involvement of the researchers. They were requested to leave comments about the questionnaire in the test-comment area provided by the software. The questionnaire was improved accordingly and, as a result, the best qualified items were selected. Finally, additional tests of technical functionalities were performed using a PC, tablet, and smartphone with various browsers, prior to questionnaire administration. In September 2020, we launched the comprehensive cross-sectional online survey. The questionnaire was provided on the survey platform SoSciSurvey and, for security and data protection reasons, hosted on the server of the Sigmund Freud University, Vienna, Austria. The survey was conducted until January 2021.

Participants The prospective participant had to be a literate individual aged 18 years or above who had a PC or smartphone and Internet access at the time.
Individuals under 18 years were explicitly excluded, while those who were not literate or lacked Internet access or access to a PC or smartphone were de facto excluded. Our objective was to recruit participants from Central Europe, West Africa, and America, with the help of our research colleagues in those regions. However, due to unforeseen circumstances, the colleague in America was unable to disseminate the questionnaires. In the other locations, the invitation with a link to participate was disseminated, particularly via the administration offices of educational institutions, to all members of the institution (not just students). We additionally disseminated it directly to our students or clients during lectures. The invitation contained a request to forward the link to friends or colleagues who met the inclusion criteria. The research was conducted in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki, 1964) and with the approval of the Ethics Commission of the Institut für Verhaltenstherapie (Institute for Cognitive Behavior Psychotherapy Training and Research), AVM, Salzburg, Austria. Participants were informed of the study's purpose and procedure, guarantee of anonymity and data protection, and voluntariness of participation, and informed consent was obtained from all respondents prior to participation. To answer our research questions, we extracted the relevant sections from the comprehensive survey. The questionnaire was structured such that no missing items were allowed. After data cleaning (Leiner, 2019), 740 responses were analyzed. Regarding the COVID-19 status in the countries where most participants were situated, Nigeria and Austria were in their second wave of infections, and Germany was in its third (or extended second) wave. Nigeria restricted public gatherings to between 10 and 100 people, and workplace closures were recommended. In Austria and Germany, public gatherings were limited to fewer than 10 people and workplace closures were required, except for essential workers. In all three countries, school closures were mandatory, and face covering policies were implemented in all public spaces. According to a stringency index where 100% denotes the strictest protocols, Nigeria, Austria, and Germany scored 58.33, 82.41, and 82.41%, respectively, at the time. The nine metrics used to calculate the Stringency Index are school closures, workplace closures, cancelation of public events, restrictions on public gatherings, closures of public transport, stay-at-home requirements, public information campaigns, restrictions on internal movements, and international travel controls (Ritchie et al., 2021); a simplified sketch of how such an index can be computed is shown below.
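At a high level, the Stringency Index averages nine policy sub-indices. The sketch below illustrates only that idea: it rescales each ordinal indicator to 0-100 and averages them, ignoring the published index's geographic-targeting flag adjustments, and the policy levels shown are illustrative rather than the countries' actual codings.

```python
import numpy as np

# Illustrative ordinal policy levels and the maximum level defined for
# each of the nine indicators (values here are assumptions, not real data).
levels =     {"school_closing": 3, "workplace_closing": 2, "cancel_events": 2,
              "gathering_restrictions": 4, "public_transport": 1, "stay_at_home": 1,
              "information_campaigns": 2, "internal_movement": 1, "intl_travel": 3}
max_levels = {"school_closing": 3, "workplace_closing": 3, "cancel_events": 2,
              "gathering_restrictions": 4, "public_transport": 2, "stay_at_home": 3,
              "information_campaigns": 2, "internal_movement": 2, "intl_travel": 4}

def stringency_index(levels: dict, max_levels: dict) -> float:
    """Rescale each ordinal indicator to 0-100 and average the sub-indices."""
    subs = [100.0 * levels[k] / max_levels[k] for k in levels]
    return float(np.mean(subs))

print(f"Stringency index: {stringency_index(levels, max_levels):.2f}%")
```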
Instrument, Procedure, and Preliminary Data Analyses The Statistical Package for the Social Sciences (SPSS) was employed for data analyses. To elicit meaningful and valid meta-scales based on the items, and to empirically verify our hypotheses, we conducted exploratory factor analyses (EFAs) of (1) compliance with the anti-pandemic recommendations, (2) the CPNs, (3) the CBSs, and (4) concerns. The EFAs were conducted to establish unidimensional scales, ensuring that an individual scale is not influenced by confounding factors, thereby obtaining valid measurements of the underlying concepts. These preliminary analyses were performed according to the following unitary pattern: (a) presenting the items and checking the suitability of the data as per Bartlett's and Kaiser-Meyer-Olkin (KMO) tests; (b) factor analyses using the scree plot and rotated component matrix to determine the optimal number of factors, followed by the interpretations; and (c) defining suitable terms for each factor.

EFA of Compliance We applied the World Health Organization (2020b) pandemic prevention guidelines at the time to investigate compliant behavior. On a five-point Likert scale (1 = never; 5 = always), participants' compliance with the following items was rated: "I cleanse my hands with soap and water and/or use hand-sanitizer more regularly," "I wear a mask or face shield at public premises to protect myself and/or others," "I avoid shaking hands or hugging people," "I clean or disinfect surfaces I might touch more often," "I have stopped or reduced traveling by public transport," "I avoid group events or crowded places," and "I am trying to boost my immunity (e.g., with vitamins, healthy food, sports)" (Supplementary Table 2). To check the suitability of the data for the EFA, the KMO criterion and Bartlett's test of sphericity were calculated. Bartlett's test yielded p ≤ 0.001, indicating high significance. The KMO value was 0.84, ensuring high suitability for the EFA. To verify the unidimensionality of the compliance items, a scree plot analysis was conducted (Supplementary Figure 2). The scree plot showed that only one eigenvalue exceeded the Kaiser criterion of 1, confirming that the compliance items are unidimensional and suitable to be combined into one scale. The unidimensional factor is termed "compliance."

EFA of Relevance of Fulfillment of CPNs On a five-point Likert scale (1 = not; 5 = very), participants rated the necessity for them to get their CPNs fulfilled during/despite the pandemic. The items used in this analysis are listed in Supplementary Table 3. To check the suitability of the data for the EFA, the KMO criterion and Bartlett's test of sphericity were employed. Bartlett's test yielded p ≤ 0.001, indicating high significance. The KMO value was 0.90, ensuring high suitability for the EFA. To elicit the optimal number of factors, a scree plot was drawn, and the Kaiser criterion was applied (Supplementary Figure 3). Based on the scree plot and Kaiser criterion, eigenvalues above 1 indicated four to be the optimal number of factors. Thus, the EFA was conducted using four factors. Further, to provide an intuitive interpretation of the analysis and the best possible separation among the four factors, a varimax rotation was applied. The EFA results (Supplementary Table 3) show that our items can be meaningfully grouped following the fundamental theory of CPNs in schema-oriented psychotherapy. These factors are named "efficacy," "pleasure," "relationship," and "self-esteem." Our empirical analysis thus supports the underlying structures of CPNs theorized above. In contrast to expectations, the items BV04_10, BV05_04, BV05_05, BV05_08, and BV05_10 did not follow the structure of the previously theorized scales. To analyze the validity of the categorization of these items, different rotation and extraction methods were applied. (A minimal code sketch of this EFA workflow is given below.)
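The analyses above were run in SPSS; an equivalent pipeline can be sketched in Python with the factor_analyzer package. The item matrix `items` below is a hypothetical stand-in for the survey data, so the printed statistics are illustrative only.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

# Hypothetical respondents x items matrix of 1-5 Likert ratings.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(740, 12)),
                     columns=[f"BV{i:02d}" for i in range(1, 13)])

# (a) Suitability checks: Bartlett's test of sphericity and the KMO criterion.
chi2, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett chi2={chi2:.1f}, p={p_value:.4f}; overall KMO={kmo_total:.2f}")

# (b) Kaiser criterion: the number of eigenvalues > 1 suggests the factor count.
ev, _ = FactorAnalyzer(rotation=None).fit(items).get_eigenvalues()
n_factors = int((ev > 1).sum())

# (c) Extract that many factors with varimax rotation and inspect the loadings.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax").fit(items)
print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))
```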
Three of the items (BV05_04, BV05_08, and BV05_10) could not be placed into theoretically and statistically meaningful groupings that would justify their inclusion and were therefore dropped. Regarding the variance explained by the factors, each factor explains approximately the same amount, indicating a similar level of importance.

EFA of CBSs On a five-point Likert scale (1 = never; 5 = always), participants rated how they dealt with the challenges of the present pandemic. The items used in this analysis are listed in Supplementary Table 4. With respect to coping based on behavioral style, Bartlett's test yielded p < 0.001, indicating high significance. The KMO value was 0.85, ensuring the suitability of the data for conducting an EFA. To elicit the optimal number of factors, a scree plot was drawn, and the Kaiser criterion was applied (Supplementary Figure 4). Based on the scree plot and Kaiser criterion, eigenvalues above 1 indicated four to be the optimal number of factors. Thus, the EFA was also conducted using four factors. To provide an intuitive interpretation of the analysis and the best possible separation among the four factors, a varimax rotation was applied. As shown by the results (Supplementary Table 4), the items can be meaningfully grouped and the factors established can be identified with the terms "self-soothing," "confrontation," "surrender," and "divert attention." However, two items (BV06_07 reflecting "escape" and BV07_09 reflecting "confrontation") were theoretically unsuitable for inclusion in the group "self-soothing" and were thus dropped.

EFA of Concerns In this analysis, the participants' concerns about health issues, crime and social insecurity, setback at school or work, and financial or economic problems, rated on a five-point Likert scale (1 = not at all; 5 = very much), were evaluated. Bartlett's test yielded p < 0.001, indicating high significance. The KMO value was 0.705, ensuring the suitability of the data for conducting an EFA. The scree plot analysis (Supplementary Figure 5) showed the unidimensionality of the concern items, making them suitable to be combined into one scale. The unidimensional factor is termed "existential concerns."

Overview of the Scales As shown in Table 1, descriptive parameters, such as means, standard deviations, skewness, and kurtosis of the scales, do not exhibit any irregularities. Generally, Cronbach's alpha values of approximately 0.65 are considered moderate but acceptable, particularly where small numbers of items are involved in exploratory research (Hinton et al., 2004; Hair et al., 2014). With values ranging from 0.65 to 0.83, the Cronbach's alphas of our scales therefore indicate acceptable to very good reliability. The corresponding Omega values are presented in the table. The table also shows that all significant intercorrelations are positive, as theoretically expected. Thus, good criterion validity is assumed. (A minimal sketch of the reliability computation follows.)
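Cronbach's alpha has a closed form, so the scale reliabilities reported in Table 1 can be reproduced from any respondents-by-items matrix. The sketch below uses a hypothetical 4-item scale (`scale_items` is simulated, not the study's data).

```python
import numpy as np

def cronbach_alpha(scale_items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    x = np.asarray(scale_items, dtype=float)   # respondents x items
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)      # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical correlated 1-5 Likert responses for a 4-item scale.
rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(740, 1))
scale_items = np.clip(base + rng.integers(-1, 2, size=(740, 4)), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(scale_items):.2f}")
```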
Multiple Linear Regression Suitability Analysis Multiple linear regression is a statistical method to estimate the relationship between several explanatory (independent) variables and one observed (dependent) variable. To provide valid results, multiple linear regression rests on several statistical assumptions, such as linearity of the associations (multivariate), normality (data are symmetrically distributed with no skew), no multicollinearity, and homogeneity of variance (homoscedasticity of residuals). Linearity of the associations was double-checked with partial (added-variable) plots, which did not indicate any better association than the applied linear one. The assumption of normally distributed residuals was visualized with a P-P plot, indicating no violation of the assumption. Possible multicollinearity problems were double-checked by calculating the variance inflation factor (VIF). With VIF values below 2.81, no multicollinearity problems were identified. A minor heteroscedasticity issue was detected in the scatterplot of fitted versus actual values. Accordingly, heteroscedasticity-consistent standard errors (Hayes and Cai, 2007) were applied to improve the model and to ensure the validity of the results. Consequent to these preliminary analyses, the model's results can be presumed to be valid. Finally, covariates such as age, gender, education, savings, and country were included to control for socio-demographic variables.

Test of Hypotheses: Effect of CPNs, CBSs, and Concerns on Compliance As shown in Table 2, the overall regression model achieved an R² of 0.232, with a significant p value of < 0.001, indicating relevant effects measured within the model. With a significant p value of 0.047 and a regression coefficient of 0.116, a positive effect of the psychological need for "efficacy" was confirmed. Thus, the more essential the basic psychological need for handling (self-efficacy), the higher the compliance that can be expected. With a significant p value of < 0.001 and a regression coefficient of 0.193, the positive effect of the psychological need for "pleasure" was verified. Thus, the more essential the fulfillment of the CPN for "pleasure" is during the pandemic, the higher the expected compliance with policies. With a significant p value of < 0.001 and a regression coefficient of −0.165, the negative effect of "confrontation" was verified. Thus, the higher the coping behavior style of confrontation during the pandemic, the lower the expected compliance with policies. With a significant p value of < 0.001 and a regression coefficient of 0.310, a positive effect of "surrender" was confirmed. Thus, the higher the coping behavior style of surrendering in dealing with the pandemic, the higher the compliance that can be expected. (A minimal sketch of the suitability checks and robust regression described above follows.)
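The paper's model was fit in SPSS; the VIF check and the regression with heteroscedasticity-consistent standard errors can be sketched with statsmodels. The paper does not name the specific HC estimator, so HC3 is an assumption here, and `df` is a hypothetical data frame standing in for the scale scores and covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical scale scores and covariates; real values come from the survey scales.
rng = np.random.default_rng(2)
cols = ["efficacy", "pleasure", "relationship", "self_esteem", "surrender",
        "confrontation", "self_soothing", "divert_attention",
        "existential_concerns", "age", "compliance"]
df = pd.DataFrame(rng.normal(size=(740, len(cols))), columns=cols)

X = sm.add_constant(df.drop(columns="compliance"))

# Multicollinearity check: VIF per predictor (the paper reports values < 2.81).
vif = pd.Series([variance_inflation_factor(X.values, i)
                 for i in range(1, X.shape[1])],
                index=X.columns[1:], name="VIF")
print(vif.round(2))

# OLS with heteroscedasticity-consistent (HC3) standard errors.
model = sm.OLS(df["compliance"], X).fit(cov_type="HC3")
print(model.summary())
```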
Core Psychological Needs This study aims to contribute to understanding behavior during the pandemic in the light of theories of schema-based therapy. Grawe's tradition teaches that the topmost motivational factor of human behavior lies in the gratification of the basic needs, as demonstrated in plan analysis (Caspar, 2009, 2018). Accordingly, the individual's motivation to comply with measures during the pandemic would depend on their topmost "plan," and the topmost level of a person's plan structure consists of the psychological needs essential to them at the given time. These needs can be categorized into the core needs of (a) "relationship," (b) "self-esteem/dignity/recognition/self-determination," (c) "efficacy/handling/actionability," and (d) "pleasure/easiness/gaudium" (Grawe, 2004; Offurum, 2019). We therefore assessed the importance of the core psychological needs during the pandemic and tested the hypotheses that, with the topmost motivation being the gratification of the CPN for "relationship" or "self-esteem," individuals would not comply with the anti-pandemic policies, but with the topmost motivation being "efficacy" or "pleasure," individuals would indeed comply with the anti-pandemic policies. These assumptions were based on the premises that, due to the nature of the pandemic (contagion through contact), the measures restrict contact, which is a core aspect of relationship, and that, due to the nature of sanctioning, individuals whose topmost plan during the pandemic was self-esteem (dignity) were highly challenged. However, for those whose topmost plan toward the pandemic lies in the category of efficacy (actionability), the measures provide an opportunity to act; and individuals whose topmost plan during the pandemic was to experience pleasure and avoid pain would comply in order to avoid the pain of the viral infection. Our results show that the four-factor categorization of basic needs according to the conceptualization of Grawe's tradition of schema-based therapy is adoptable. Further, our results show that the higher the importance of fulfillment of the CPN for efficacy, the higher the compliance that can be expected. This seems to demonstrate the driving factor of efficacy (actionability, control, or achievement of solutions) amidst the threats of the pandemic. Perhaps those whose topmost plan toward the pandemic lies in efficacy cannot endure being passive toward the threats. Our results also show that the higher the importance of fulfillment of the CPN for pleasure, the higher the compliance that can be expected. This seems to suggest that restrictions on festivities must not have prevented individuals from having pleasure, or that avoiding pain, which is a core aspect of the CPN for pleasure, must be a driving force for compliance. However, we found no statistically significant relationship of the CPNs of relationship and self-esteem with compliance. Concerning relationship, this may be because the restrictions on human contact to curtail the pandemic may have particularly jeopardized the fulfillment of needs for relationship, but relationships maintained via digital technology may have partially compensated the deficit, though not enough for a statistically significant positive effect on compliance. Concerning self-esteem, this may be because, for these people, maintaining elevated self-esteem was indeed essential at the time, but unlike the factor "relationship," self-esteem may not have been strongly challenged by these restrictions, particularly not by those on human contact. At the same time, people's self-esteem may still not have been considered enough by authorities to produce a statistically significant positive effect on compliance.

Behavior Styles Young's tradition of schema-based therapy differentiates between the maladaptive coping styles and responses of "surrender," "avoidance," and "confrontation," whereby avoidance is viewed in an active ("flight"/"divert attention") or passive ("pacifying"/"self-soothing") way. We therefore assessed the coping styles of participants during the pandemic and tested the hypotheses that participants exhibiting the behavior styles of "confrontation" and "diversion of attention" would express significantly lower compliance, while participants exhibiting the styles of "surrender" and "self-soothing" would express significantly higher compliance.
Our results indicate that the four factors established could be meaningfully termed "surrender," "self-soothing," "divert attention," and "confrontation," and that the four-factor categorization of coping styles is adoptable for schema-based therapy. Although here the term "divert attention" best reflects the factor established by the present items, it still corresponds with the concept of "active avoidance" or "escape" used in Young's theory; as such, this grouping is principally in line with the fundamental theory of the coping styles in schema-oriented psychotherapy. Nevertheless, this observation may be useful for the improvement of items in future research, as it may indicate a more complex underlying structure and may provide a step toward resolving the ambiguity in conceptualization mentioned above (cf. "freezing" by Atkinson, 2012, and Faßbinder et al., 2016, versus Roediger and Zarbock, 2015). Further, our results show that the higher the coping style of "confrontation," the less compliance can be expected, presenting a significant negative effect of "confrontation" on "compliance." Our interpretation is that the coping style of "fighting against" in response to a threat is explained in the psychology of motivation by the concept of "reactance" as the immediate response to a restriction of freedom, particularly where the restrictions are not perceived as legitimate or justified and the restriction is not irrelevant (Graupmann et al., 2016). Furthermore, we did not find that either "self-soothing" or "divert attention" had a significant effect on compliance. We consider that these passive and active forms of avoidance have no significant effect on compliance because they do not express proximity, which may be decisive for positive or negative compliance. While Karmakar et al. (2021) found self-soothing to be a coping style during the present pandemic, Orgilés et al. (2021) found that avoidance-oriented styles were related to better psychological adaptation during the present pandemic. Further, our results show that the coping style of "surrender" correlates significantly with compliance, expressing a significant positive effect of "surrender" on "compliance." This is not surprising, as the behavior style of succumbing or giving in would imply abiding by sanctions. These findings seem in line with Blais et al. (2021), who found that "rule-followers" (cf. CBS-surrender) and "deliberate planners" (cf. CPN-efficacy) exhibit greater compliance with social distancing than those who are callous and antagonistic in personality.

Concerns Both traditions of schema-based therapy dwell in the context of personal distress and impairment of wellbeing, and within this study, these are conceptualized as worry or concerns in a non-clinical context. We therefore assessed participants' concerns during the pandemic in our model and additionally tested the hypothesis that the stronger participants' other existential concerns are, for example, about health issues, crime and social insecurity, setback at school or work, and financial or economic problems, the less compliant they are. This hypothesis could not be verified, showing that these existential issues do not have any statistically significant effect on whether individuals comply with anti-pandemic recommendations or not. Imbriano et al. (2021) likewise found no significant association of worry with compliance with health behaviors.

Contribution to the Literature Our research adds practical value to the literature.
First, to the best of our knowledge, this is the first study to investigate compliance behavior during a pandemic in light of the fundamentals of schema-based psychotherapy. Second, we believe that our findings can be beneficial for citizens, policymakers, risk managers, researchers, and experts in human behavior and health, as our research contributes to the understanding of the psychological aspects of behavior during a pandemic. Microbiological and epidemiological data, although valuable, cannot exclusively inform pandemic policy; holistic approaches require a more in-depth knowledge of human behavior. Finally, our work presents a preliminary step toward reconciling the two independently developed traditions in schema-oriented psychotherapy.

Limitations It may be easy to endorse the finding that people tend to comply with anti-pandemic measures when they possess the coping style of submission. However, we venture to claim that compliance with pandemic measures does not necessarily signify subservience to authorities. Our findings do not demonstrate all motivations for compliance and non-compliance. There must be others: for instance, Dinić and Bodroža (2021) found that selfishness had negative effects on compliance with protective measures, and prosocial tendencies in general positively correlate with protective behaviors. Individuals may be non-compliant to demonstrate their disagreement with the authorities, or as an exhibition of power. Non-compliance may even be an infantile act of defiance or influenced by peer pressure. Major barriers to compliance may include the complexity of the problem, the demands made by authorities and the steps to be taken, as well as misunderstanding the benefits of compliance. People could also fear side effects, be skeptical of costs, or feel suspicion grounded in real experiences. Compliance may also be rooted in infantile servility, mental thralldom, or renunciation of responsibility. Further research is definitely needed. There are some particular limitations we ought to note: our study relied on self-reported responses, which are influenced by respondents' imperfect memory or social desirability. Although data on personal needs and coping styles can primarily be self-reported, limitations inherent to self-report measures may affect the results. Further, participation was conveniently online; however, persons with Internet access might not represent the general population. Thus, our findings are to be interpreted with respect to context and limitations, and generalized with care. Above, we illustrated the socio-demographic characteristics of participants and presented the descriptive statistics of participants' ages, not based on an idealized symmetric distribution (biological age grouping in 10- or 20-year bands) but grouped into social-generational groups. Social generations are viewed as cohorts, whereby a cohort is seen as people within a delineated population who experienced a significant range of the same life events within a given historical time (Pilcher, 1994). Beyond the sociological dimension, the concept of a social generation provides a psychological dimension, in the sense of belonging and shared identity, for understanding a socio-demographic group (Biggs, 2007). Sandeen (2008), with reference to Strauss and Howe, views it as a "peer personality" and suggests that social-generational groups act as a very meaningful segmentation in research. With 74% of participants belonging to Generations Z and Y, generalization to other social age groups must be made with care.
Though the scales of "self-esteem," "confrontation," "surrender," and "divert attention" are acceptable, slightly increasing the number of items would likely lead to much better Cronbach's alpha values. If one aims to standardize the items for a pandemic questionnaire based on schema therapy, it would be advantageous to improve the items in later research. Our conclusions should therefore be taken as incentives for further exploration. Finally, our study was limited to a certain stage of the pandemic. Different results could be expected in a longitudinal study.

Outlook An extensive future study of the fundamental theories of schema-based psychotherapy during a pandemic may involve investigating whether clients' problems during a pandemic are schema-driven (e.g., the "Vulnerability to Harm/Illness" schema or the "Negativity/Pessimism" schema) or whether a schema that has been dormant in people can be activated by a pandemic (cf. Schema Therapy Bulletin, 2020). In investigating the theory of schema-oriented therapy during a pandemic, we focused on general, cross-cultural, and cross-gender trends while controlling for the effects of country, gender, education, and savings. Future studies may look at similar phenomena independently for different groups, such as gender, marital status, nationality, ethnicity, and education. We believe science advances through benevolent criticism, counter-opinion, and suggestions for improvement. One would not generalize that all human behavior in all pandemics is exactly alike. Compliant behavior may also depend on other factors, for example, the kind or novelty of the virus or its mutations, the policy details, and the duration of the pandemic. Similarly, compliance may depend on individuals' fears, their sources of information and level of critical analysis of it, and their trust in authorities. Therefore, we contend that these additional aspects call for consideration. Furthermore, developing a standardized questionnaire, particularly on the fulfillment of core psychological needs within the framework of schema-oriented psychotherapy, would be highly valuable for practitioners. We invite other researchers to this endeavor and believe that we have herewith provided a strong foundation.

PRACTICAL IMPLICATIONS AND CONCLUSION A year has passed, and the world is still in the midst of the COVID-19 pandemic, which has led us to revisit one of psychology's fundamental questions, namely, what determines human behavior, and to examine it in the light of a contemporary and influential theory: schema-based psychotherapy. Our findings present key insights that may (a) help effectively promote individual psychological consistency, (b) assist governments in taking this into account, and (c) thereby foster collectively health-responsible behaviors. First, these results teach that when drafting and communicating sanctions during a pandemic, the authorities should consider the driving force of behavior, i.e., the fulfillment of the CPNs. Authorities cannot limit their responsibility to promulgating restrictions that may impinge on fundamental human needs; to effectively control contagion during a pandemic, they must see the relevance of human need fulfillment. They should clearly highlight admissible and feasible actions that allow for the fulfillment of basic needs despite the context of a pandemic. Individuals are more prepared to comply when their topmost goals are taken into account and less prepared to comply when their topmost goals are infringed.
Accordingly, the authorities should emphasize the superordinate goal of citizens' efficacy and performance, and put power into citizens' hands. Likewise, they should emphasize the superordinate goal of increasing pleasure and reducing pain. Since restrictions can impair the fulfillment of the need for relationships amidst a pandemic, authorities should, for example, help build psychological support for the population and provide and highlight alternative ways that facilitate people in finding their idiosyncratic strategies to meet, become acquainted with, relate to, and communicate with others, as well as to develop and nourish relationships, in line with the anti-pandemic measures. For instance, to facilitate compliance among those who hold the CPN for relationship as a primary necessity during the pandemic, authorities need to emphasize that individuals may still socialize while sanitizing and limiting physical contact, and that the pandemic restrictions ultimately aim at reinforcing possibilities for relationships in the long run and at reducing the chances of losing loved ones. Since restrictions also challenge self-determination, with regard to self-esteem, authorities should address individuals' needs for honor, dignity, and autonomy, and make recommendations with due respect and without signs of defamation. Thus, in order to facilitate compliance among those who hold the CPN for self-esteem (dignity/autonomy), authorities must communicate with humility and respect, and authentically present themselves as being concerned with serving the people, without arrogance or demonstrations of paternalism. They should also show gratitude for the valued contribution of compliance, which the citizens ultimately make autonomously. Further, individuals have different ways of coping with challenges. While those with the coping style of surrender may easily comply with sanctions, particular attention should be given to individuals with the style of confrontation; their cooperation is not to be taken for granted, and reactance or the rejection of coerced blessings is to be expected. With due understanding of the concept of the behavior style of overcompensation/confrontation in schema therapy, there is a need to ensure that reactance is not provoked and, when provoked, that it is dealt with competently in the interest of the subject. The key contribution of the current study is to highlight the importance of emphasizing individuals' superordinate goals (their basic needs) and behavior styles, and of supporting the imperative fulfillment of psychological needs, with respect for the CPNs on which individuals are focused at the particular time, rather than merely pressing for compliance. The key message is that supporting people from the perspective of the psychological needs they are focused on fulfilling in the given moment, and with respect for their behavior styles, may be more effective for facilitating compliance than forcing or exhorting people to change their behavior. Therefore, the effectiveness of anti-pandemic measures depends on the extent to which individuals' resources can be activated within their peculiarities. A practical application of motivation via core psychological needs is illustrated in the concept called the "motive-oriented psychotherapeutic relationship" (Grawe, 2004; Caspar et al., 2005) in the tradition of Grawe.
Accordingly, a decisive factor for effective therapy is the extent to which specific measures address the abilities within the patient's existing characteristics and subsequently activate their willingness to take action (Grawe and Grawe-Gerber, 1999). Dealing with reactance in therapy is illustrated in the concept of "empathic confrontation" in the tradition of Young and may be enriched with the concept of defense mechanisms in psychoanalysis. Defining precise mechanisms for political or medical authorities to achieve this during a pandemic is outside the purview of this article; for now, we hope they grasp the importance of the concept. We hope this research can assist in fostering understanding and cooperation between compliant and non-compliant people for the common goal of survival in any future pandemics. It must also be noted that we do not assert that compliance or adherence to authority is an ethical or moral value per se (cf. the Milgram experiment). Nevertheless, given the ongoing pandemic, and since compliance cannot be presumed, we believe that research that elucidates compliant behavior during a pandemic will aid in improving people's lives during these trying times.

DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Commission of the Institut für Verhaltenstherapie (Institute for Cognitive Behavior Psychotherapy Training and Research), AVM, Salzburg, Austria. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS CO conceptualized the study, designed the questionnaire, performed the primary data analysis, prepared the original draft of the manuscript, and contributed to the acquisition of funding. CO, ML, and BJ contributed to the improvement of the theoretical and statistical model, the analysis, and the interpretation. All authors contributed to the manuscript revision, and read and approved the submitted version.

FUNDING The open access publication of this article has been made possible thanks to the funding of the Department of Psychology, the Faculty of Psychology and Sports Science, as well as the office of the Vice Rectorate for Research, all of the University of Innsbruck.
2022-02-07T14:18:49.742Z
2022-02-07T00:00:00.000
{ "year": 2022, "sha1": "3ed57db06ce03e2d9f3aff45d5c1b176ef483aa3", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2022.805987/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "3ed57db06ce03e2d9f3aff45d5c1b176ef483aa3", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
11174206
pes2o/s2orc
v3-fos-license
Fasting and cancer treatment in humans: A case series report.

Short-term fasting (48 hours) was shown to be effective in protecting normal cells and mice, but not cancer cells, against high-dose chemotherapy, an effect termed Differential Stress Resistance (DSR), but the feasibility and effect of fasting in cancer patients undergoing chemotherapy are unknown. Here we describe 10 cases in which patients diagnosed with a variety of malignancies voluntarily fasted prior to (48-140 hours) and/or following (5-56 hours) chemotherapy. None of these patients, who received an average of 4 cycles of various chemotherapy drugs in combination with fasting, reported significant side effects caused by the fasting itself other than hunger and lightheadedness. Chemotherapy-associated toxicity was graded according to the Common Terminology Criteria for Adverse Events (CTCAE) of the National Cancer Institute (NCI). The six patients who underwent chemotherapy both with and without fasting reported a reduction in fatigue, weakness, and gastrointestinal side effects while fasting. In those patients whose cancer progression could be assessed, fasting did not prevent the chemotherapy-induced reduction of tumor volume or tumor markers. Although the 10 cases presented here suggest that fasting in combination with chemotherapy is feasible, safe, and has the potential to ameliorate side effects caused by chemotherapies, they are not meant to establish practice guidelines for patients undergoing chemotherapy. Only controlled, randomized clinical trials will determine the effect of fasting on clinical outcomes, including quality of life and therapeutic index.

INTRODUCTION Reduction of undesired toxicity by selective protection of normal cells without compromising the killing of malignant cells represents a promising strategy to enhance cancer treatment. Calorie restriction (CR) is an effective and reproducible intervention for increasing life span, reducing oxidative damage, enhancing stress resistance, and delaying/preventing aging and age-associated diseases such as cancer in various species, including mammals (mice, rats, and non-human primates) [5-8]. Recently, a fasting-based intervention capable of differentially protecting normal and cancer cells against high-dose chemotherapy in cell culture and in neuroblastoma-bearing mice was reported [9]. In the neuroblastoma xenograft model, mice were allowed to consume only water for 48 hours prior to etoposide treatment. Whereas high-dose etoposide led to 50% lethality in ad libitum fed mice, fasting protected against the chemotoxicity without compromising the killing of neuroblastoma cells [9]. Previous human studies have shown that alternate-day dietary restriction and short-term fasting (5 days) are well tolerated and safe [10-12]. In fact, children ranging from 6 months to 15 years of age were able to complete 14 to 40 hours of fasting in a clinical study carried out at the Children's Hospital of Philadelphia [13]. Furthermore, alternate-day calorie restriction caused clinical improvements and reduced markers of inflammation and oxidative stress in obese asthmatic patients [12,14]. Here, we report 10 cases of patients diagnosed with various types of cancer who voluntarily fasted prior to and following chemotherapy. The results presented here, which are based on self-assessed health outcomes (Table 1) and laboratory reports, suggest that fasting is safe and raise the possibility that it can reduce chemotherapy-associated side effects. However, only a randomized controlled clinical trial can establish its efficacy. In the case descriptions that follow, self-reported side effects from matched fasting and non-fasting cycles are graded with CTCAE and compared with an unpaired, two-tailed t test (Figure 1); a minimal sketch of such a comparison is given below.
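The sketch below illustrates the statistical comparison described in the Figure 1 caption, using hypothetical CTCAE grades rather than the patients' actual data.

```python
import numpy as np
from scipy import stats

# Hypothetical average CTCAE grades (0-5) per side effect, from matched
# fasting and ad libitum cycles; the patients' real data are shown in Figure 1.
fasting_grades = np.array([0.5, 1.0, 0.0, 0.5, 1.0, 0.5])
adlib_grades   = np.array([1.5, 2.0, 1.0, 2.5, 2.0, 1.5])

# Unpaired, two-tailed t test, as reported in the Figure 1 caption.
t_stat, p_value = stats.ttest_ind(fasting_grades, adlib_grades)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}"
      + ("  (* p < 0.05)" if p_value < 0.05 else ""))
```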
RESULTS Ten cancer patients receiving chemotherapy, 7 females and 3 males with a median age of 61 years (range 44-78 yrs), are presented in this case series report. Four suffered from breast cancer, two from prostate cancer, and one each from ovarian cancer, uterine cancer, non-small cell carcinoma of the lung, and esophageal adenocarcinoma. All patients voluntarily fasted for a total of 48 to 140 hours prior to and/or 5 to 56 hours following chemotherapy administered by their treating oncologists (Tables 2, 3).

Figure 1. Self-reported side effects after chemotherapy with or without fasting. Data represent the average CTCAE grade from matching fasting and non-fasting (ad lib) cycles. Six patients received either chemotherapy-alone or chemo-fasting treatments. Self-reported side effects from the closest two cycles were compared with one another. Statistical analysis was performed only on matching cycles. Data are presented with the standard error of the mean (SEM). The P value was calculated with an unpaired, two-tailed t test (*, P < 0.05).

Case 1 This is a 51-year-old Caucasian woman diagnosed with stage IIA breast cancer receiving adjuvant chemotherapy consisting of docetaxel (TAX) and cyclophosphamide (CTX). She fasted prior to her first chemotherapy administration. The fasting regimen consisted of complete caloric deprivation for 140 hours prior to and 40 hours after chemotherapy (180 hours total), during which she consumed only water and vitamins. The patient completed this prolonged fasting without major inconvenience and lost 7 pounds, which were recovered by the end of the treatment (Figure 2H). After the fasting-chemotherapy cycle, the patient experienced mild fatigue, dry mouth, and hiccups (Figure 2I); nevertheless, she was able to carry out her daily activities (working up to 12 hours a day). By contrast, in the subsequent second and third treatments, she received chemotherapy accompanied by a regular diet (Figure 2I). This time the side effects forced her to withdraw from her regular work schedule. For the fourth cycle, she opted to fast again, although with a different regimen, which consisted of fasting 120 hours prior to and 24 hours post chemotherapy. Notably, her self-reported side effects were lower despite the expected cumulative toxicity from previous cycles. Total white blood cell (WBC) and absolute neutrophil counts (ANC) were slightly better at nadir when chemotherapy was preceded by fasting (Figure 2A, C; Table S1). Furthermore, platelet levels decreased by 7-19% during cycles 2 and 3 (ad libitum diet) but did not drop during the first and fourth cycles (fasting) (Figure 2D). After the fourth chemotherapy cycle combined with a 144-hour fast, her ANC, WBC, and platelet counts reached their highest levels since the start of chemotherapy 80 days earlier (Figure 2A, C and D).

Case 2 This is a 68-year-old Caucasian male diagnosed in February 2008 with esophageal adenocarcinoma metastatic to the left adrenal gland. The initial treatment consisted of 5-fluorouracil (5-FU) combined with cisplatin (CDDP), concurrent with radiation for the first two cycles. Throughout these first two cycles, the patient experienced multiple side effects including severe weakness, fatigue, mucositis, vomiting, and grade 2-3 peripheral neuropathy (Figure 3). During the third cycle, 5-FU administration was interrupted due to severe nausea and refractory vomiting (Figure 3).
In spite of the aggressive approach with chemotherapy and radiation, his disease progressed, with new metastases to the right adrenal gland, lung nodules, left sacrum, and coracoid process documented by computed tomography-positron emission tomography (CT-PET) performed in August 2008. These findings prompted a change in his chemotherapy regimen for the fourth cycle to carboplatin (CBDCA) in combination with TAX and 5-FU (96-hour infusion) (Table 2). During the fourth cycle, the patient incorporated a 72-hour fast prior to chemotherapy and continued the fast for 51 hours afterward, consuming only water. The rationale for the 51-hour post-chemotherapy fast was to cover the period of continuous infusion of 5-FU. The patient lost approximately 7 pounds, of which 4 were regained during the first few days after resuming an ad libitum diet (data not shown). Although a combination of three chemotherapeutic agents was used during this cycle, self-reported side effects were consistently less severe compared to cycles in which calories were consumed ad lib (Figure 3). Prior to his fifth cycle, the patient opted to fast again. Instead of receiving the 5-FU infusion over 96 hours, as he did previously, the same dose of the drug was administered within 48 hours, and the fasting regimen was also modified to 48 hours prior to and 56 hours post chemotherapy delivery. Self-reported side effects were again less severe than those in association with an ad libitum diet, and the restaging CT-PET scan indicated objective tumor response, with decreased standardized uptake values (SUV) in the esophageal mass, the adrenal gland metastases, and the lung nodule. From the sixth to the eighth cycle, the patient fasted prior to and following chemotherapy treatments (Table 2). Fasting was well tolerated in all cycles and chemotherapy-dependent side effects were reduced, except for mild diarrhea and abdominal cramps that developed during the seventh cycle (Figure 3). Ultimately, the patient's disease progressed and the patient died in February 2009.

Case 3 This is a 74-year-old Caucasian man who was diagnosed in July 2000 with stage II prostate adenocarcinoma, Gleason score 7 and a baseline PSA level of 5.8 ng/ml. He achieved an undetectable PSA nadir after radical prostatectomy performed in September of 2000, but experienced biochemical recurrence in January 2003 when his PSA rose to 1.4 ng/ml. Leuprolide acetate together with bicalutamide and finasteride were prescribed. However, administration of these drugs had to be suspended in April 2004 due to severe side effects related to testosterone deprivation. Additional therapies including triptorelin pamoate, nilutamide, thalidomide, CTX, and ketoconazole failed to control the disease. In 2007 the patient's PSA level reached 9 ng/ml and new metastases were detected on a bone scan. Although TAX at 25 mg/m² was administered on a weekly basis, the PSA level continued to increase, reaching 40.6 ng/ml (data not shown). Bevacizumab was added to the treatment, and only then did the PSA drop significantly (data not shown). Throughout the cycles with chemotherapy the patient experienced significant side effects including fatigue, weakness, metallic taste, dizziness, forgetfulness, short-term memory impairment, and peripheral neuropathy (Figure 4I). After discontinuing the chemotherapy, his PSA rose rapidly. TAX was resumed at 75 mg/m² every 21 days, and was complemented with granulocyte colony-stimulating factor (G-CSF).
Once again the patient suffered significant side effects (Figure 4I). In June 2008, chemotherapy was halted. The patient was enrolled in a phase III clinical trial with abiraterone acetate, a drug that can selectively block CYP17, a microsomal enzyme that catalyzes a series of reactions critical to nongonadal androgen biosynthesis [15]. During the trial, the patient's PSA level increased to 20.9 ng/ml (Figure 4H), prompting resumption of chemotherapy and G-CSF. This time the patient opted to fast prior to chemotherapy. His fasting schedule consisted of 60 hours prior to and 24 hours post drug administration (Table 2). Upon restarting chemotherapy with fasting the PSA level dropped, and notably, the patient reported considerably milder side effects than in previous cycles in which he consumed calories ad lib (Figure 4I). He also experienced reduced myelosuppression (Figure 4A-G). During the last three cycles, in addition to fasting, the patient applied testosterone (cream, 1%) for five days prior to chemotherapy. As a consequence, the PSA level, along with the testosterone level, increased dramatically. Nonetheless, 3 cycles of chemotherapy combined with fasting reduced the PSA from 34.2 to 6.43 ng/ml (Figure 4H). These results imply that the cytotoxic activity of TAX against cancer cells was not blocked by fasting.

Case 4

This is a 61-year-old Caucasian female who was diagnosed in June 2008 with poorly differentiated non-small cell lung carcinoma (NSCLC). A staging PET scan documented a hypermetabolic lung mass, multiple mediastinal and left perihilar lymph nodes, and widespread metastatic disease to the bones, liver, spleen, and pancreas. The initial treatment commenced with the administration of TAX 75 mg/m² and CBDCA 540 mg every 21 days. Although she was on a regular diet, during the first 5 cycles she lost an average of 4 pounds after each treatment, most likely due to chemotherapy-induced anorexia. The patient reported that she returned to her original weight, but only three weeks after drug administration, just before a new cycle. Additional side effects included severe muscle spasms, peripheral neuropathy, significant fatigue, mucositis, easy bruising, and bowel discomfort (Figure 5H). During the sixth cycle, which consisted of the same drugs and dosages, the patient fasted for 48 hours prior to and 24 hours post chemotherapy. She lost approximately 6 pounds during the fasting period, which she recovered within 10 days (data not shown). Besides mild fatigue and weakness, the patient did not complain of any of the other side effects experienced during the five previous cycles (Figure 5H). Cumulative side effects such as peripheral neuropathy, hair loss, and cognitive impairment were not reversed. By contrast, self-reported acute toxic side effects were consistently reduced when chemotherapy was administered in association with fasting (Figure 5H). In the sixth and last cycle, the patient reported that her strength returned more quickly after the chemotherapy, so that she was able to walk 3 miles three days after drug administration, whereas in previous cycles (ad libitum diet) she had experienced severe weakness and fatigue which limited any physical activity. No significant differences were observed in the patient's blood analysis (Figure 5A-G). The last PET scan, performed in February 2009, showed stable disease in the main mass (lungs) and decreased uptake in the spleen and liver when compared with her baseline study.
Case 5

This is a 74-year-old woman diagnosed in 2008 with stage IV uterine papillary serous carcinoma. Surgery (total abdominal hysterectomy-bilateral salpingo-oophorectomy, TAH-BSO, with lymph node dissection) followed by adjuvant chemotherapy was recommended. Due to significant enlargement of the right ureter, a right nephrectomy was also performed. Post-operatively, six cycles of CBDCA (480 mg) and paclitaxel (280 mg) were administered every 21 days. During the first treatment the patient maintained her regular diet and experienced fatigue, weakness, hair loss, headache, and gastrointestinal discomfort (Figure 6). By contrast, during cycles 2-6, the patient fasted before and after chemotherapy and reported a reduction in the severity of chemotherapy-associated side effects (Table 2; Figure 6). Fasting did not appear to interfere with chemotherapy efficacy, as indicated by the 87% reduction in the tumor marker CA-125 after the fourth cycle (data not shown).

Case 6

This is a 44-year-old Caucasian female diagnosed with a right ovarian mass (10 × 12 cm) in July 2007. Surgery (TAH-BSO) revealed stage IA carcinosarcoma of the ovary with no lymph node involvement. Adjuvant treatment consisted of six cycles of ifosfamide and CDDP, administered from July to November of 2007. She remained free of disease until an MRI revealed multiple new pulmonary nodules in August 2008. Consequently, chemotherapy with taxol, carboplatin, and bevacizumab was initiated. By November, however, a CT scan showed progression of the cancer. Treatment was changed to gemcitabine plus TAX complemented with G-CSF (Neulasta) (Table 2 and Table S2). After the first dose of gemcitabine (900 mg/m²), the patient experienced prolonged neutropenia (Figure 7A) and thrombocytopenia (Figure 7D), which forced a delay of the day 8 dosing. During the second cycle the patient received a reduced dose of gemcitabine (720 mg/m²), but again developed prolonged neutropenia and thrombocytopenia, causing dose delays. For the third and subsequent cycles, the patient fasted for 62 hours prior to and 24 hours after chemotherapy. The patient not only carried out the fasting without hardship but also showed a faster recovery of her blood cell counts, allowing completion of the chemotherapy regimen (gemcitabine 720 mg/m² on day 1 plus gemcitabine 720 mg/m² and TAX 80 mg/m² on day 8). During the fifth cycle, she fasted under the same regimen and received a full dose of gemcitabine (900 mg/m²) and TAX (Tables 2 and S2). Her complete blood count showed consistent improvement during the cycles in which chemotherapy was combined with fasting. A trend toward slightly less pronounced nadirs and considerably higher zeniths in ANC, lymphocyte, and WBC counts was observed (Figure 7A, B, C, respectively; Table S2). During the first and second cycles (ad libitum diet), gemcitabine alone induced prolonged thrombocytopenia, which took 11 and 12 days to recover, respectively (Figure 7D; Table S2), but following the first combined fasting-gemcitabine treatment (third and subsequent cycles), the duration of thrombocytopenia was significantly shorter (Figure 7D; Table S2).

Case 7

This is a 66-year-old Caucasian male who was diagnosed in July 1998 with prostate adenocarcinoma, Gleason score 8. A ProstaScint study performed in the same year displayed positive uptake of the radiotracer in the right iliac nodes, consistent with stage D1 disease.
The patient was treated with leuprolide, bicalutamide, and finasteride. In December 2000, the disease progressed. He started a second cycle with leuprolide acetate and also received high-dose-rate (HDR) brachytherapy and external beam radiation with intensity-modulated radiation therapy (IMRT) to the prostate and pelvis. In April 2008, a Combidex scan revealed a 3 × 5 cm pelvic mass and left hydronephrosis, prompting initiation of TAX chemotherapy supplemented with G-CSF. The patient received 60-75 mg/m² of TAX for 8 cycles. Throughout this period the patient fasted for 60-66 hours prior to and 8-24 hours following chemotherapy (Table 2). Side effects from fasting included grade one lightheadedness (according to CTCAE 3.0) and a drop in blood pressure, neither of which interfered with his routine. Chemotherapy-associated self-reported side effects included grade one sensory neuropathy (Figure 8I). The patient's ANC, WBC, platelet, and lymphocyte levels remained in the normal range throughout treatment, although he did develop anemia (Figure 8A-G). PSA levels consistently decreased, suggesting that fasting did not interfere with the therapeutic benefit of the chemo-treatment (Figure 8H).

Case 8

This is a 53-year-old Caucasian female who was diagnosed with stage IIA breast cancer (HER2+) in 2008. After a lumpectomy procedure, she received 4 cycles of adjuvant chemotherapy with TAX (75 mg/m²) and CTX (600 mg/m²) every 21 days. For all 4 cycles the patient fasted 64 hours prior to and 24 hours post chemotherapy administration (Table 2). Self-reported side effects included mild weakness and short-term memory impairment (Figure 9).

Case 9

This is a 48-year-old Caucasian female diagnosed with breast cancer. Her adjuvant chemotherapy consisted of 4 cycles of doxorubicin (DXR, 110 mg/dose) combined with CTX (1100 mg/dose), followed by weekly paclitaxel and trastuzumab for 12 weeks. Prior to her first chemotherapy treatment, the patient fasted for 48 hours and reported no adverse effects. During the second and subsequent cycles the patient fasted for 60 hours prior to chemotherapy and for 5 hours post drug administration (Table 2). She reported no difficulties in completing the fasting. Although she experienced alopecia and mild weakness, the patient did not suffer from the other commonly reported side effects associated with these chemotherapy drugs (Figure 10).

Case 10

This is a 78-year-old Caucasian female diagnosed with HER2-positive breast cancer. After mastectomy, six cycles of adjuvant chemotherapy were prescribed with CBDCA 400 mg (AUC = 6) and TAX (75 mg/m²), complemented with G-CSF (Neulasta), followed by 6 months of trastuzumab (Table 2). Throughout the treatment the patient fasted prior to and after chemotherapy administration. Although the patient adopted fasting regimens of variable length, no severe side effects were reported (Figure 11H; Table 2). Her WBC, ANC, platelet, and lymphocyte counts remained within normal levels (Figure 11A-D) throughout the treatment, but she developed anemia (Figure 11E-G).

DISCUSSION

Dietary recommendations during cancer treatment are based on the prevention or reversal of nutrient deficiencies to preserve lean body mass and minimize nutrition-related side effects, such as decreased appetite, nausea, taste changes, or bowel changes [16].
Consequently, for cancer patients who have been weakened by prior chemotherapy cycles or are emaciated, many oncologists could consider a fasting-based strategy to be potentially harmful. Nevertheless, studies in cell culture and animal models indicate that fasting may actually reduce chemotherapy side effects by selectively protecting normal cells [9]. Following the publication of this pre-clinical work, several patients, diagnosed with a wide variety of cancers, elected to undertake fasting prior to chemotherapy and shared their experiences with us. In this heterogeneous group of men and women, fasting was safely repeated in multiple cycles for up to 180 hours prior to and/or following chemotherapy. Minor complaints that arose during fasting included dizziness, hunger, and headaches, at a level that did not interfere with daily activities. Weight lost during fasting was rapidly recovered in most of the patients and did not lead to any detectable harm.

We obtained self-reported assessments of toxicity from all 10 patients who incorporated fasting with their chemotherapy treatments. Since many of the chemotoxicities are cumulative, we evaluated serial data including all the combined fasting- and non-fasting (ad libitum diet) associated chemotherapy cycles (Figure S1). Toxicity was graded utilizing a questionnaire based on the Common Terminology Criteria for Adverse Events of the National Cancer Institute, version 3.0 (Table 1). Although the lack of prospective collection of toxicity data and grading is a significant limitation, this series provides early insight into the feasibility and potential benefit of combining fasting with chemotherapy. Fewer and less severe chemotherapy-induced toxicities were reported by all the patients in combination with fasting, even though fasting cycles were often carried out in the later portion of the therapy (Figure S1). Nausea, vomiting, diarrhea, abdominal cramps, and mucositis were virtually absent from the reports of all 10 patients in the cycles in which fasting was undertaken prior to and/or following chemotherapy, whereas at least one of these symptoms was reported by 5 of the 6 patients during cycles in which they ate ad libitum (Figure S1). The four patients who fasted throughout their treatments reported low severity for the majority of the side effects, in contrast to the typical experience of cancer patients receiving the same chemotherapy regimens (Figures 8I, 9, 10, 11H). For the 6 patients who received chemotherapy with or without fasting, we compared the severity of the self-reported side effects in the 2 closest fasting/non-fasting (ad libitum diet) cycles in which the patient received the same chemotherapy drugs at the same dose. There was a general and substantial reduction in the self-reported side effects in combination with fasting (Figure 1). Symptoms such as fatigue and weakness were reported to be significantly reduced (P < 0.001 and P < 0.00193, respectively), whereas vomiting and diarrhea were virtually absent in combination with fasting (Figure 1). In addition, there was no side effect whose average severity was reported to be increased during fasting-chemotherapy cycles (Figure 1 and Figure S1).

Challenging conditions such as fasting or severe CR stimulate organisms to suppress growth and reproduction and divert energy towards cellular maintenance and repair to maximize the chance of survival [17,19].
In simple organisms such as yeast, resistance to oxidants and chemotherapy drugs can be increased by up to 10-fold in response to fasting/starvation and up to 1,000-fold in cells lacking homologs of Ras, AKT, and S6 kinase [9]. Nevertheless, such protection and oxidative stress resistance are completely reversed by the expression of oncogene-like genes [9,18]. In mammals, the mechanism(s) responsible for the protective effect of fasting against chemotherapy-induced toxic side effects is not completely understood. It may involve a reduction in anabolic and mitogenic hormones and growth factors such as insulin and insulin-like growth factor 1 (IGF-1), as well as up-regulation of several stress resistance proteins [20-25]. In fact, mice with liver-specific IGF-I gene deletion (LID), which have an ~80% reduction of circulating IGF-I, and mice with genetic disruptions in the IGF-I receptor (heterozygous knockout IGF-IR+/-) or its downstream elements have been shown to be more resistant against multiple chemotherapy agents and oxidative stress, respectively [26,27]. Alternatively, fasting-dependent DSR may be, in part, mediated by cell cycle arrest in normal cells, whereas transformed cells continue to proliferate, remaining vulnerable to anticancer drugs [25,28]. Although the mutations driving cancer progression are heterogeneous across tumor types, the majority of oncogenic mutations render cancer cells independent of growth signals [28,29], which we hypothesize prevents cancer cells from responding to the fasting-induced switch to a protected mode [9]. Therefore, DSR would have the potential to be applied independently of the cancer type. Although this has not yet been demonstrated, the remarkable effects of fasting on the down-regulation of a number of growth factors and signal transduction pathways targeted by anti-cancer drugs, including IGF-I and the TOR/S6 kinase pathways, raise the possibility that it could enhance the efficacy of cancer treatment drugs and may even be as effective as some of them.

In summary, in this small and heterogeneous group of cancer patients, fasting was well tolerated and was associated with a self-reported reduction in multiple chemotherapy-induced side effects. Although bias could affect the estimation of the side effects by the patients, the case reports presented here are in agreement with the results obtained in animal studies and provide preliminary data indicating that fasting is feasible, safe, and has the potential to differentially protect normal and cancer cells against chemotherapy in humans. Nevertheless, only a clinical trial, such as the randomized controlled clinical trial currently being carried out at the USC Norris Cancer Center, can establish whether fasting protects normal cells and increases the therapeutic index of chemotherapies.

METHODS

From April 2008 to August 2009, 10 unrelated patients diagnosed with a variety of cancers volunteered to incorporate fasting with their chemo-treatments. We invited these patients to complete a self-assessment survey based on the Common Terminology Criteria for Adverse Events of the National Cancer Institute, version 3.0. For the purpose of this study only, we developed a questionnaire that contained 16 easily identifiable and commonly reported side effects; the seriousness of the symptoms was graded from 0 to 4, with each consecutive number corresponding to no side effect, mild, moderate, severe, and life-threatening, respectively.
Adverse effects were further divided into 3 major categories: general, gastrointestinal, and central/peripheral nervous system side effects (Table 1, original questionnaire). The survey was delivered to patients by mail, e-mail, or fax, and every patient was instructed to complete it 7 days after each treatment cycle. Explanations and assistance with patients' concerns were offered throughout the study. Eligibility was limited to patients who had voluntarily fasted prior to and/or following chemotherapy. Medical records including basic demographic information, diagnoses, treatments, imaging studies, and laboratory analyses were also retrospectively reviewed (Tables 2, 3). All the aforementioned procedures were in compliance with the Institutional Review Board of the University of Southern California (USC).
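To make the statistical comparison described above concrete, the following is a minimal sketch in Python (not the authors' code) of the unpaired, two-tailed t-test used to compare self-reported toxicity grades between matching fasting and ad libitum cycles. All grade values below are hypothetical placeholders, not study data.

```python
# Minimal sketch, assuming CTCAE-style grades (0-4) for one symptom,
# taken from the closest matching fasting / ad libitum cycles.
from scipy import stats

fasting_grades = [1, 0, 1, 2, 0, 1]   # hypothetical grades, fasting cycles
ad_lib_grades  = [3, 2, 3, 2, 4, 3]   # hypothetical grades, ad libitum cycles

# scipy's ttest_ind is unpaired and two-tailed by default, matching the Methods.
t_stat, p_value = stats.ttest_ind(fasting_grades, ad_lib_grades)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 marks significance (*)
```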
Add-on quetiapine in the treatment of major depressive disorder in elderly patients with cerebrovascular damage

Background: Depressive episodes in elderly patients with cerebrovascular damage are characterized by poor responses to standard antidepressants. Recent reports have suggested that the atypical antipsychotic quetiapine may have antidepressant properties and, in mice, may prevent memory impairment and hippocampal neurodegeneration induced by global cerebral ischemia.

Objective: To evaluate the efficacy of combination therapy with quetiapine in depressed elderly patients with cerebrovascular damage.

Methods: An open-label, 6-month follow-up study of patients with major depressive disorder (DSM-IV) and cerebral abnormalities (assessed by MRI) without severe cognitive impairment. Patients who had not responded to standard antidepressants (months of treatment 6.5 ± 7.2) additionally received quetiapine (300 ± 111 mg/d). Patients were evaluated at baseline (t0) and Months 1, 3, and 6 (t1, t3, t6) using the Clinical Global Impressions Scale for Severity (CGI-S) and the Hamilton Depression Rating Scale (HAM-D).

Results: Nine patients were included in the study, with a mean age of 72.8 ± 9.4 years. CGI-S scores decreased from baseline to Month 6: 5.8 ± 0.7 (t0), 5.4 ± 0.7 (t1), 5.0 ± 0.8 (t3), and 4.5 ± 1.0 (t6), with a significant improvement at 6 months compared with baseline (P = 0.006). A significant improvement over the 6-month period was also observed with HAM-D scores (t0 = 27.2 ± 4.0, t6 = 14.8 ± 3.8, P < 0.001).

Conclusion: In this study, quetiapine was efficacious as combination therapy in depressed elderly patients with cerebrovascular damage. The promising results from this study warrant confirmation in large, randomized, double-blind, placebo-controlled studies.

Introduction

A serious and common risk to the elderly is depression, which, if untreated, is associated with a high rate of relapse, an increased likelihood of chronicity, and an elevated rate of mortality [1]. Affective disorders (such as depression) and vascular disease (including heart disease) are frequently comorbid conditions that share certain etiopathogenetic and prognostic factors. If untreated, depressive episodes may worsen the course of vascular disease (particularly cerebrovascular diseases) and compromise both quality of life and life expectancy. The close correlation between these comorbidities recently led to the identification of so-called "vascular depression" (Figure 1) [2]. Depressive episodes in elderly patients with cerebrovascular damage are characterized by low response rates to antidepressants, and it has therefore become increasingly important to investigate new treatments [3]. However, few therapeutic choices have been validated by strong clinical evidence. Quetiapine is an atypical antipsychotic approved for the treatment of schizophrenia and episodes of mania associated with bipolar disorder. Several studies have also described quetiapine monotherapy as effective and well tolerated in unipolar [4] and bipolar depression [5]. It was recently reported that quetiapine prevents memory impairment and hippocampal neurodegeneration induced by global cerebral ischemia in mice [6], and that pre-administration of quetiapine significantly alleviated the depression- and anxiety-like behavioural changes induced by global cerebral ischemia in mice [7]. The authors suggest that these results open a wider perspective for the clinical use of quetiapine.
Nevertheless, the US Food and Drug Administration (FDA) advises that there may be an increased risk of mortality (mainly due to cardiovascular or infectious causes) in elderly patients with dementia-related psychosis treated with atypical antipsychotics.

Objective

To evaluate the effectiveness of quetiapine as add-on therapy in elderly patients with late-onset depression and cerebrovascular damage.

Study design

An open-label study of depressed elderly patients with cerebrovascular damage who were resistant to ongoing treatments and were observed for up to 6 months during add-on treatment with quetiapine.

Figure 1. Clinical characteristics of vascular depression.

Written consent for the study was obtained after giving patients a complete description of the study.

Study medication

Quetiapine was administered as add-on therapy with commonly prescribed antidepressants (paroxetine, citalopram, sertraline, mirtazapine). Quetiapine therapy was initiated at a minimum daily dose of 25 mg/d on Day 1 and was titrated up to 200 mg/d on Day 7 according to the schedule shown in Table 1. After Day 7, the dosage was increased by 100 mg every 2 days until the optimal dose, based on individual response and tolerability, was reached.

Efficacy assessments

Efficacy was evaluated using the Clinical Global Impression-Severity scale (CGI-S) [11] and the HAM-D rating scale [10].

Statistical methods

Multivariate analysis of variance (MANOVA) was used to test for differences in mean CGI-S and HAM-D scores over time.

Patient and treatment characteristics

Nine patients (6 females, 3 males) who had not responded to standard antidepressants (mean [± SD] 6.5 ± 7.2 months of treatment) were included in the study. Patients had a mean age of 72.8 ± 6.4 years, a mean baseline CGI-S score of 5.8 ± 0.7, and a mean baseline HAM-D score of 27.2 ± 4.0. Antidepressants administered in combination with quetiapine are shown in Table 2. Other relevant medications taken were benzodiazepines (6 patients) and gabapentin (1 patient). The mean quetiapine dose (± SD) during the study was 300 ± 111 mg/day.

Tolerability

No patients discontinued the study. Side effects reported by patients during the period of add-on quetiapine treatment were sedation (2 patients) and drowsiness (1 patient).

Discussion and conclusion

Add-on quetiapine therapy significantly improved depressive symptoms and was well tolerated in these elderly patients with comorbid depression and cerebrovascular damage who had previously failed to respond to standard antidepressants. Although limited by its open-label design and small sample size, this study supports the efficacy and tolerability of quetiapine in this elderly patient population and is consistent with our previous findings [12] and other positive studies of quetiapine in bipolar depression [5], major depressive disorder [4], and generalized anxiety disorder [13]. The results of a study in mice suggest that quetiapine may have protective effects against the impairments induced by cerebral ischemia [6]. Another study showed that quetiapine significantly attenuates spatial memory impairment induced by bilateral common carotid artery occlusion, and that this improvement parallels the alleviating effects of quetiapine on occlusion-induced neurodegeneration in the hilus of the hippocampus [7].
Quetiapine may have a neuroprotective and neurogenic role, and this may be related to its therapeutic effects on cognitive deficits in patients with schizophrenia and depression, in which the structure and functions of the hippocampus are implicated [6,7]. The neuroprotective effect of quetiapine was recently confirmed in humans with bipolar disorders [14]. Quetiapine (along with clozapine and risperidone) is among the drugs most commonly used to treat behavioural problems in dementia patients and, according to published data, does not seem to cause severe side effects; thus, it may have a possible role in vascular depression [15-17]. These results require confirmation from large, randomized, double-blind, placebo-controlled studies.
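To illustrate the longitudinal comparison described under Statistical methods above, here is a minimal sketch in Python. The paper reports a MANOVA over time; as a simpler stand-in, this sketch runs a one-way repeated-measures ANOVA on a single scale (HAM-D). The individual scores are hypothetical, generated around the reported group means; only the visit means t0 = 27.2 and t6 = 14.8 come from the text.

```python
# Minimal sketch (not the authors' analysis): one-way repeated-measures ANOVA
# on HAM-D across the four visits, used here in place of the reported MANOVA.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
visit_means = {"t0": 27.2, "t1": 24.0, "t3": 19.0, "t6": 14.8}  # t1/t3 hypothetical

# Long-format table: one row per patient (n = 9) per visit, noise added for realism.
records = [{"subject": s, "visit": v, "hamd": m + rng.normal(0, 2)}
           for s in range(9)
           for v, m in visit_means.items()]
df = pd.DataFrame(records)

# Within-subject factor: visit; dependent variable: HAM-D score.
result = AnovaRM(df, depvar="hamd", subject="subject", within=["visit"]).fit()
print(result)  # F-test for the effect of time on depression severity
```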
Amelioration of colitis progression by ginseng-derived exosome-like nanoparticles through suppression of inflammatory cytokines

Background: Damage to the healthy intestinal epithelial layer and regulation of the intestinal immune system, closely interrelated, are considered pivotal parts of the curative treatment for inflammatory bowel disease (IBD). Plant-based diets and phytochemicals can support the immune microenvironment at the intestinal epithelial barrier for a balanced immune system by improving the intestinal microecological balance, and may have therapeutic potential in colitis. However, there have been only a few reports on the therapeutic potential of plant-derived exosome-like nanoparticles (PENs) and the underlying mechanism in colitis. This study aimed to assess the therapeutic effect of PENs from Panax ginseng, ginseng-derived exosome-like nanoparticles (GENs), in a mouse model of IBD, with a focus on the intestinal immune microenvironment.

Method: To evaluate the anti-inflammatory effect of GENs on acute colitis, we treated Caco2 cells and lipopolysaccharide (LPS)-induced RAW 264.7 macrophages with GENs and analyzed the gene expression of pro-inflammatory and anti-inflammatory cytokines such as TNF-α, IL-6, and IL-10 by real-time PCR (RT-PCR). Furthermore, we examined bacterial DNA from feces and determined the alteration of gut microbiota composition in DSS-induced colitis mice after administration of GENs through 16S rRNA gene sequencing analysis.

Result: GENs with low toxicity showed a long-lasting intestinal retention effect for 48 h, which could lead to effective suppression of pro-inflammatory cytokines such as TNF-α and IL-6 through inhibition of NF-κB in DSS-induced colitis. As a result, longer colon length and suppressed thickening of the colon wall were observed in the mice treated with GENs. Due to the amelioration of the progression of DSS-induced colitis with GENs treatment, survival was prolonged to 17 days compared with 9 days in the PBS-treated group. In the gut microbiota analysis, the Firmicutes/Bacteroidota ratio was decreased, indicating that GENs have therapeutic effectiveness against IBD. Ingesting GENs would be expected to slow colitis progression, strengthen the gut microbiota, and maintain gut homeostasis by preventing bacterial dysbiosis.

Conclusion: GENs have a therapeutic effect on colitis through modulation of the intestinal microbiota and immune microenvironment. GENs not only ameliorate the inflammation in the damaged intestine by downregulating pro-inflammatory cytokines but also help balance the microbiota at the intestinal barrier and thereby improve the digestive system.
Introduction

Inflammatory bowel disease (IBD), including Crohn's disease and ulcerative colitis, is a group of severe diseases that accompany chronic immune-mediated colitis. According to data from the Centers for Disease Control and Prevention (CDC) in 2015, approximately three million patients reported being diagnosed with IBD in the United States. Although in recent years people have developed more knowledge about health care and try to take care of themselves, bad eating habits and the burden of stress and irregular sleep patterns in modern society cause immune system derangement, which might affect the intestinal environment [1,2]. There are conventional drugs that have been approved to ameliorate IBD, such as 5-aminosalicylate (5-ASA), steroids, immunosuppressants, and anti-tumor necrosis factor alpha (TNF-α) drugs [3,4]. However, severe adverse effects of these drugs, such as toxicity and drug resistance, have been reported; therefore, there is a limit to their long-term use in clinical application [5-8]. Thus, there is an urgent need to identify new therapeutic strategies with new mechanisms to treat IBD.

Edible plants and dietary phytochemicals have been reported to alleviate inflammatory gut conditions and alter the gut microbiota by enriching the gut microenvironment with antioxidant compounds and bioactive components, such as phenolics [9]. Consumption of edible plants rich in antioxidants can suppress the pathogenesis of various diseases. However, the bioactive compounds in ingested plants are usually metabolized after these biomolecules are absorbed in the gastrointestinal tract. The resulting plasma metabolites generally have short retention times in the body and lower bioactivity than the precursor biomolecules. Thus, metabolism of ingested bioactive compounds can lower their anti-oxidant and anti-inflammatory activities.

Recently, the therapeutic application potential of plant-derived exosome-like nanoparticles (PENs) has received much attention due to their biocompatibility, low toxicity over a wide range of dosages, and significant bioactivity. Furthermore, the high stability of PENs enables their oral administration to be effective. Deng et al. have reported that orally administered broccoli-derived exosome-like nanoparticles are effective against acute and chronic IBD and show preventive and therapeutic effects by upregulating anti-inflammatory cytokines through activation of AMP-activated protein kinase [10]. Teng et al. have demonstrated that ginger-derived exosome-like nanoparticles containing bioactive miRNAs are taken up by the gut microbiota [11]. The lipid constituents of these PENs can sufficiently upregulate IL-22, which modulates the intestinal barrier, the composition of the gut microbiota, and host physiology. However, whether PENs containing bioactive molecules can provide health benefits in colitis, directly induce anti-inflammatory activity in immune cells, and influence the intestinal microenvironment in pathological conditions has not been fully understood.
Ginseng (Panax ginseng), one of the most valuable medicinal herbs, has frequently been used to regulate inflammatory diseases. Ginseng contains many bioactive ingredients such as polysaccharides, polyacetylenes, and ginsenosides [12]. Among these components, ginsenosides are considered the main bioactive ingredients of ginseng and are known to be useful in the treatment of inflammatory diseases. Some specific ginsenosides, such as Rg3, Rb1, and Rk, are known to have a strong influence on the anti-inflammatory cascade through downregulation of the activity of inflammatory signaling pathways such as the nuclear factor-κB (NF-κB), activator protein-1, and p38 MAPK signaling pathways [13]. Among the bioactive molecules in ginseng, protopanaxadiol (PPD) ginsenosides such as Rg3, Rb1, and Rh2 have been reported to attenuate colitis and reduce symptoms such as diarrhea and additional inflammation in the surrounding tissues by suppressing increased inflammatory cytokines [14-17]. However, treatment of IBD with ginsenosides may be problematic due to the instability of ginsenosides under physical conditions such as varying temperatures and in liquid forms, and the transformation or conversion of ginsenosides after oral administration.

As Panax ginseng is known to have valuable medicinal properties in inflammation, PENs from Panax ginseng may be expected to possess therapeutic potential. Ginseng-derived exosome-like nanoparticles (GENs) are known to be biocompatible and stable and to have a low immunological risk. Due to these characteristics of GENs, the effects of GENs in treating cancer and neural disease have been researched [18-22]. However, there is no evidence of the role of orally administered GENs in inflammatory diseases such as IBD; therefore, we evaluated the influence of GENs on IBD and their ability to regulate pro-inflammatory cytokines and alter gut microbiota components. GENs can help reduce the levels of inflammatory cytokines such as TNF-α and interleukin 6 (IL-6) in inflammatory cells through downregulation of NF-κB expression. Furthermore, we demonstrated that GENs containing various biochemical compounds could support the intestinal microenvironment with an increased level of probiotics such as Lactobacillus. Thus, GENs could be used as a promising therapeutic agent for IBD. The schematic illustration shows the influence of GENs on IBD (Scheme 1) [23].

DLS measurement of GENs

The size and surface charge were measured via dynamic light scattering (DLS) using a Zetasizer Nano ZS (Malvern Instruments, UK). The GENs were diluted 1000 times with DEPC water. The diluted GENs were added to a cuvette (12 mm square polystyrene cuvette, DTS0012, Malvern, UK) and to a disposable cuvette for zeta potential measurement (disposable capillary cell, DTS1070, Malvern, UK). The measurement was conducted at 37 °C, and all experiments were run in triplicate.

Measurement of protein concentration of GENs

To quantify GENs, the protein concentration of GENs was determined by Bradford protein assay (Yeasen, China). The fresh GENs were diluted 10 times with 1X PBS, and 5 µL of the diluted GENs was used for measurement of the protein concentration. The Bradford assay was performed as described in a previous report [24].
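As a concrete illustration of the quantification step above, here is a minimal sketch in Python (an assumed workflow, not the authors' protocol): a linear Bradford standard curve is fitted to BSA standards and inverted to recover the sample concentration, applying the 10× dilution factor described above. All standard and absorbance values are hypothetical.

```python
# Minimal sketch, assuming a linear Bradford standard curve (A595 vs. BSA conc.).
import numpy as np

bsa_standards = np.array([0.0, 0.25, 0.5, 1.0, 1.5])   # mg/mL, hypothetical
a595 = np.array([0.00, 0.12, 0.24, 0.47, 0.70])        # absorbance at 595 nm, hypothetical

# Least-squares linear fit of the standard curve: A595 = slope * conc + intercept.
slope, intercept = np.polyfit(bsa_standards, a595, 1)

def protein_conc(sample_a595: float, dilution: float = 10.0) -> float:
    """Back-calculate the protein concentration (mg/mL) of the undiluted sample."""
    return (sample_a595 - intercept) / slope * dilution

print(protein_conc(0.074))  # ~1.5 mg/mL, on the order of the reported GEN concentration
```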
Analysis of cellular uptake of GENs by flow cytometry

To determine the cellular uptake of GENs, Caco2 and RAW 264.7 cells (2 × 10⁵ cells/well) were seeded in 12-well plates. The cells were incubated for 24 h, and subsequently 20 µL of DiD-labeled GENs was added to the cells. The cells were incubated at 37 °C for 6 h. Then, the cells were collected by trypsin treatment for detection of the internalized fluorescence of GENs. The results were processed with FlowJo software (BD Biosciences, USA).

Mice and ethical approval of animal experiments

Male Balb/C mice (6-8 weeks old, 18-20 g) were purchased from Sino-British SIPPR/BK Lab. Animal Co., Ltd. (Shanghai, China). All animal experiments were performed at Fudan University in accordance with the Guiding Principles for the Care and Use of Experimental Animals, with ethical approval number 2021-07-YJ-JZS-90 (Shanghai, China). The mice received 2.5% (w/v) DSS in drinking water under standard conditions. On day 7 post-administration, the mice were randomly divided into groups (N = 6) and orally administered 1 mL of 1 mg/mL GENs (based on the protein concentration analyzed by Bradford assay) once daily for 7 days. On day 15, the mice were anesthetized and humanely sacrificed by cervical dislocation, and the large intestine was extracted for subsequent analysis.

Characterization of GENs

The diameter, size, and zeta potential of GENs were measured using dynamic light scattering (DLS). The average GEN size was 146.5 nm with a low polydispersity index (PDI), and the zeta potential of GENs was a negative value of −19.2 mV (Fig. 1A and B). The average size distribution and concentration of GENs were analyzed by nanoparticle tracking analysis (NTA) (Fig. 1C). The average number of nanoparticles was 6.93 × 10¹² particles. Data were collected in triplicate. To determine the morphology of GENs, GENs were visualized by transmission electron microscopy (TEM) and Cryo-TEM. According to the TEM and Cryo-TEM images, GENs were homogeneous, cup-shaped vesicles with diameters of approximately 100 nm and above (Fig. 1D). To quantify GENs, the protein concentration of GENs was measured using a Bradford assay. 1 mL of fresh GENs was obtained per 10 g of ginseng, and a concentration of 1.57 ± 0.32 mg/mL of fresh GENs was measured (Fig. 1E).

Suppression of M1-like macrophage polarization and increased M2-like macrophage polarization

An inappropriate immune response might indicate a poor prognosis in IBD. To assess whether GENs can modulate immune reactions and influence the macrophages in the intestinal microenvironment, lipopolysaccharide (LPS)-induced RAW 264.7 macrophages were used as an in vitro model. M1 macrophages can induce inflammatory responses by secreting pro-inflammatory cytokines into the inflamed tissue. Thus, to evaluate whether GENs have anti-inflammatory effects on M1 macrophages, we stimulated the polarization of RAW 264.7 macrophages into these pro-inflammatory macrophages via LPS treatment (0.1 µg/mL) [25-27], followed by GENs treatment (1-50 µg/mL for 24 h). Interleukin (IL)-6 and IL-10 were used as phenotypic markers for M1 and M2 macrophages, respectively. GENs significantly downregulated IL-6 in the M1-polarized RAW 264.7 cells in a concentration-dependent manner (Fig. 2A). In addition, the effect of GENs on the polarization of RAW 264.7 cells was analyzed via flow cytometry (Fig. 2B and C). The results showed that the M2 macrophage marker CD206 was significantly upregulated after the cells were treated with GENs. Furthermore, we evaluated the expression levels of pro- and anti-inflammatory cytokines in these cells by RT-qPCR (Fig. 2D-F). The results indicated that the levels of the pro-inflammatory cytokines TNF-α and IL-6 were significantly decreased in the cells treated with GENs compared with the levels in the negative-control group (cells treated with 1X PBS), while treatment of the cells with 1 mM and 3 mM 5-ASA did not significantly affect the levels of any of these cytokines. Conversely, the anti-inflammatory cytokine IL-10 was significantly upregulated by GENs.

Cellular uptake of GENs in vitro

Efficient cellular delivery of GENs could yield an improved therapeutic effect. Thus, we used DiD-labeled GENs and observed their cellular uptake. As shown in Fig. 2G and H, in Caco2 and RAW 264.7 cells, the cellular uptake of GENs gradually increased for up to 6 h. The internalization efficiency of the DiD-labeled GENs in Caco2 and RAW 264.7 cells was 56.2% ± 1.91 and 98.95% ± 0.25, respectively. Furthermore, Caco2 cells incubated with DiD-labeled GENs were examined using confocal microscopy (Fig. 2I). The results indicated that GENs could be efficiently taken up by the Caco2 and RAW 264.7 cell lines at human body temperature (37 °C).

Stability of GENs in in vitro digestion

To confirm the stability of GENs under physiological conditions, we incubated GENs in solutions of different pH mimicking the human digestive system at 37 °C and observed them at 0, 2, and 4 h (Fig. S1). Fig. S1 shows the change in the size of GENs in pH 2, pH 7.5, and pH 7 solutions. In pH 2 solution, which mimics gastric pepsin solution, the size of GENs remained almost unchanged compared with GENs in saline solution (pH 7) at all time points. In pH 7.5 solution, which mimics bile extract solution, the size of GENs changed dramatically compared with GENs in saline solution (pH 7) over time. The heterogeneity of GENs also gradually increased in all groups at all time points (Fig. S1c). As a result, there was no significant difference for GENs in pH 2 solution for 4 h, which indicated that GENs can be very stable under varying pH conditions. However, enlarged particles were observed in pH 7.5 solution at 4 h. It was difficult to tell whether GENs aggregated or individual GENs swelled up in basic solution. Collectively, GENs are highly resistant to the digestive system in both gastric pepsin solution and bile extract solution. The properties of GENs may be pH-dependent, specifically at high pH values.

Suppression of inflammation by inhibition of NF-κB

To explore whether GENs have an anti-inflammatory effect in inflammatory disease, 5 µg/mL and 10 µg/mL GENs were applied to Caco2 cells exposed to bacteria-derived LPS, and inflammatory genes were analyzed (Fig. S2). 5-ASA, an active compound that is highly effective in treating IBD, was also used as a control. The mRNA expression of NF-κB, cytokines, and chemokines, including TNF-α, IL-1β, IL-8, and iNOS, was evaluated by RT-PCR. As expected, after treatment with GENs at both concentrations, the expression levels of NF-κB, TNF-α, IL-1β, and IL-8 mRNA were statistically decreased in Caco2 cells. Specifically, the mRNA expression levels of IL-8, TNF-α, and IL-1β were decreased as much as those in the 5-ASA-treated groups. With regard to the gene expression of iNOS, the GENs-treated groups exhibited decreased iNOS expression, but this was statistically detectable only at the highest concentration of GENs and in both of the 5-ASA-treated groups.

According to previous reports on the anti-inflammatory effects of Panax ginseng [28-31], Panax ginseng contains various bioactive constituents such as ginsenosides. Tung et al. [32] evaluated some of the compounds in ginseng that have anti-inflammatory effects and anti-oxidant activity. Furthermore, Ahn et al. [33] analyzed the suppression of the NF-κB signaling pathway induced by the ginsenoside Rf in macrophages. Ginseng and its bioactive components regulated cytokines and chemokines, suggesting suppression of the NF-κB signaling pathway. In light of these studies, GENs may contain various bioactive components that originate from Panax ginseng; thus, GENs could have potential benefit as anti-inflammatory agents. The RT-qPCR results indicated that GENs attenuate inflammation by downregulating the pro-inflammatory mediator NF-κB.

Protection against DSS-induced colitis by oral gavage with GENs

We used a DSS-induced acute colitis model to explore the biological effects of treatment with an oral gavage of GENs on IBD. To confirm whether GENs can exert a protective effect against DSS-induced colitis, 0.02 g of freeze-dried GENs was dissolved in 200 µL of 1X PBS, and GENs were orally administered to Balb/C mice daily for 7 days before administration of 2.5% DSS. On day 8 of oral administration of GENs, 2.5% DSS was started via the drinking water (Fig. 3). When 2.5% DSS was consistently administered, there was a markedly smaller reduction of body weight in the mice treated with GENs compared with the control mice treated with PBS. Furthermore, the body weights of the control mice treated with PBS decreased rapidly at day 13 (Fig. 3A). The results indicated that GENs significantly inhibited DSS-induced colitis progression, as evidenced by the fact that body weight was only slightly reduced in the mice treated with GENs. We compared the lengths of the intestines excised from the mice treated with 1X PBS and GENs at day 15, and the colon lengths of the mice treated with PBS were noticeably shorter than those of mice in the control group and mice treated with GENs (Fig. 3B and C). After treatment with GENs, the intestines of the mice were collected, and total RNA was extracted for further analysis. Then, the relative mRNA expression of genes involved in the inflammatory response was analyzed. According to the results, GENs can selectively inhibit TNF-α, IL-17A, and IL-6, while programmed death-ligand 1 (PD-L1) and IL-10 were upregulated (Fig. 3D). However, unexpectedly, the mRNA expression of IFN-γ and TGF-β did not show statistically significant differences after treatment with GENs.
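The relative mRNA quantification described above is most often done with the 2^(−ΔΔCt) method; the paper does not state its exact scheme, so the following Python sketch is an assumption for illustration only, with hypothetical Ct values.

```python
# Minimal sketch (assumed 2^-ΔΔCt method, not stated in the paper): fold change
# of a target gene (e.g. TNF-α) relative to a housekeeping gene and the PBS control.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt, treated sample
    d_ct_control = ct_target_control - ct_ref_control   # ΔCt, control sample
    dd_ct = d_ct_treated - d_ct_control                 # ΔΔCt
    return 2 ** (-dd_ct)

# Hypothetical Ct values: TNF-α vs. GAPDH in GEN-treated vs. PBS-treated cells.
print(fold_change(26.0, 18.0, 24.0, 18.0))  # 0.25 -> four-fold downregulation
```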
Therapeutic effect of orally administered GENs in vivo

To determine the effect of GENs on the development of DSS-induced colitis, we administered GENs (0.02 g of freeze-dried GENs dissolved in 200 µL of 1X PBS) via oral gavage to mice with DSS-induced colitis daily for 7 days. As shown in Fig. 4, the administration of GENs to mice with DSS-induced colitis resulted in a reduction in colon shortening compared with that of the mice in the PBS group (Fig. 4A and B). Subsequently, the colon thicknesses of the mice were observed using a CT scan. The colonic wall thickness of the mice treated with GENs was significantly reduced, at 0.6 ± 0.03 mm, compared with 1.1 ± 0.32 mm in the DSS-PBS-treated group (Fig. 4C). With regard to body weights, there was no significant change in the control group, and the mice treated with GENs had significantly reduced weight loss on day 9, whereas the mice given 2.5% DSS and PBS demonstrated weight loss (N = 6) (Fig. 4D).

To assess the symptoms of colitis, the signs of DSS-induced colitis were evaluated using the disease activity index (DAI). The DAI score is used to determine the severity of DSS-induced colitis and is calculated from three parameters: body weight loss, stool consistency, and rectal bleeding [34] (a scoring sketch is given after the Discussion below). We evaluated the severity of the DSS-induced colitis and disease progression in mice using the DAI assessment. Over time, we observed significantly increased DAI scores of up to 6 for the PBS group after the first day of the experiment, whereas the DAI scores in the GENs treatment group were around 2 points. Although the scores in the GENs treatment group tended to increase from 0 to 2 points, the symptoms were significantly relieved (Fig. 4E). As expected, due to improvement in the progression of DSS-induced colitis with GENs treatment, survival was prolonged to 17 days in the GENs treatment group compared with 9 days in the PBS-treated group (Fig. 4F).

Alongside the comparison of injured and improved colonic tissue, a quantitative analysis of gene expression was performed. GENs treatment resulted in the downregulation of pro-inflammatory cytokines including TNF-α, interleukin 17A (IL-17A), and IL-6, while the gene expression levels of PD-L1 and IL-10 statistically increased (Fig. 4G). The mRNA expression levels of IFN-γ and TGF-β slightly decreased, but this change was not statistically significant. Assuming that inflammatory progression was ameliorated and that the damaged tissues were normalized by administration of GENs, GENs could aid in the downregulation of NF-κB responses and of pro-inflammatory macrophage activation in the affected colon. Thus, mice treated with DSS and PBS were especially vulnerable to damage to the intestinal mucosa and showed severe intestinal inflammatory progression throughout the experiment, while those in the GENs treatment group exhibited only slightly increased DAIs, and the inflammatory response was ameliorated.

Histopathological findings of DSS-induced colitis

To determine the severity of colitis, hematoxylin and eosin (H&E)-stained colonic tissues were analyzed (Fig. 4H). The H&E-stained colonic tissues from mice treated with 2.5% DSS showed histological and pathological changes. The colonic tissues of mice with 2.5% DSS-induced colitis were consistent with gross lesions such as crypt loss or damage, destroyed mucosa, and inflammatory cell infiltration. In the presence of GENs in colitis, the colonic tissue showed comparatively little crypt damage and only mild degeneration of the crypts and mucosa. There were fewer histological changes in the colonic tissue of mice treated with GENs than in the colonic tissue of mice in the DSS and PBS treatment groups.

Liver toxicity of GENs in vivo

To evaluate the liver toxicity of GENs in treated mice, serum aspartate aminotransferase (AST) and alanine aminotransferase (ALT) levels were measured after 7 days of oral administration of GENs (Fig. 4I). Between mice administered 1X PBS (control group) and mice administered GENs, there was no significant difference in serum AST and ALT levels. Our data showed minimal hepatic toxicity of regularly orally administered GENs.

Accumulation of orally administered GENs

To our knowledge, no study has reported whether orally ingested GENs can reach the intestine without degradation or how long they can accumulate there. Thus, we labeled GENs with a lipophilic carbocyanine dye, DiD (DiIC18(5); 1,1′-dioctadecyl-3,3,3′,3′-tetramethylindodicarbocyanine, 4-chlorobenzenesulfonate salt), and administered a single oral dose of DiD-labeled GENs to the mice. The results indicated that high amounts of DiD-labeled GENs were distributed in the intestines at 30 h and that a slight DiD-GEN signal lasted for 48 h, whereas free DiD was completely excreted by 30 h after administration (Fig. S3A, B). The mean region-of-interest (ROI) signal ratio of DiD-labeled GENs to free DiD in the intestines was significantly increased, by approximately 3.6 times, at 24 h after administration (Fig. S3C). This finding indicates that DiD-labeled GENs are physically and chemically stable during gastric transit and may facilitate sustained delivery of GENs to the intestinal site in the GI tract compared with free DiD, suggesting that GENs can be delivered throughout the large intestine and that the medicinal effects of GENs might last for a long time in vivo. GENs typically consist of lipids; thus, highly stable GENs could provide a strategy for long-term accumulation in the intestines without early elimination, even under various pH conditions. To overcome the limitations of orally administered drugs for colon-targeted delivery, GENs could replace edible plants, wild phytochemicals, or unstable chemical drugs while retaining their advantages and maximizing therapeutic outcomes.

Alteration of the gut microbiota induced by GENs

According to previous articles, ginseng affects the structure of the gut microbiota [12]. Among the various biochemical metabolites in ginseng, ginsenosides such as Rg3 and Rb1 exhibit particularly potent pharmacological effects on the gut microenvironment and can enhance beneficial microbiota under pathological conditions. Based on these findings, it is expected that GENs containing various metabolites could promote the growth of probiotics. Thus, we investigated whether administration of GENs could alter the abundance of the microbiome and gut microbiota to improve intestinal immune homeostasis in mice.

As displayed in Fig. 5, the phylum profiles of the microbiome showed that the relative abundance of the gut microbiome in mice gavaged with 2.5% DSS changed after administration of GENs. The number of species in the intestinal microbial community was determined by the Chao and Shannon indices. The diversity and community abundance in the mice treated with GENs appeared to improve slightly compared with mice in the DSS control group (Fig. 5A). The species composition of the microbiome differed between the DSS control group and the GENs treatment group (Fig. 5B and C). The phylum profiles in the GENs treatment group exhibited a significant induction of Bacteroidota and a reduction in Firmicutes (Fig. 5D). A microbial imbalance was observed in the DSS control group at the phylum level in the major bacterial phyla, including Firmicutes and Bacteroidota. An increased level of Firmicutes is found in most IBD cases. There is considerable focus on the Firmicutes/Bacteroidota ratio because it is affected by increased dysbiosis. The decreased Firmicutes/Bacteroidota ratio in the GENs treatment group indicates therapeutic effectiveness against IBD (Fig. 5E). Therapeutic strategies for maintaining normal intestinal function and homeostasis aim to achieve an appropriate balance between Firmicutes and Bacteroidota [35]. The results indicate that the intestinal microenvironment was normalized and stabilized by the treatment with GENs against IBD.

The gut microbiota community members were also evaluated (Fig. S4). An increased population of Helicobacter is known to affect pathogenesis in IBD by triggering an autoimmune reaction, and an increased abundance of Oscillibacter indicates increased colitis severity. GENs administration clearly decreased these harmful bacteria in the intestines [36]. Recently, an increased level of Ruminococcus, a prevalent gut microbe, has been linked to aggravated symptoms of IBD through the production of pro-inflammatory complex polysaccharides [37]; the population of Ruminococcus was increased in the DSS control group, while the relative abundance of Ruminococcus was statistically decreased in the GENs treatment group.

According to previous reports, the probiotic Lactobacillus is known to have a preventive effect against colitis [38]. Some probiotics may have particular relevance to the cellular immune response to intestinal microorganisms because they can help to balance relapsing intestinal conditions by shifting from a T helper cell 1 (Th1)-mediated immune response toward a Th2/Th3-promoting profile. As expected, GENs treatment significantly increased the relative abundance of Lactobacillus, suggesting that GENs notably promoted Lactobacillus growth compared with the DSS control group.

Lachnospiraceae, one of the core families of the gut microbiota, is generally detected in the human intestine. Lachnospiraceae are associated with the production of short-chain fatty acids (SCFAs), which are crucial for maintaining intestinal homeostasis through the activation of regulatory T cells. Unexpectedly, the abundance of Lachnospiraceae was slightly decreased in the GENs treatment group compared with the DSS control group.

Desulfovibrio and Odoribacter are probiotics that protect against colitis by enhancing host immunity and maintaining intestinal integrity. However, the abundances of Desulfovibrio and Odoribacter were slightly increased in the DSS control group, although the difference was not statistically significant.
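To make the diversity and ratio metrics above concrete, here is a minimal Python sketch (not the authors' 16S pipeline) computing the Shannon diversity index and the Firmicutes/Bacteroidota ratio from phylum-level read counts; the counts are hypothetical placeholders for what real 16S rRNA profiling would produce.

```python
# Minimal sketch: Shannon index H' = -Σ p_i ln(p_i) over relative abundances,
# plus the Firmicutes/Bacteroidota ratio used as a dysbiosis indicator.
import numpy as np

phylum_counts = {"Firmicutes": 5200, "Bacteroidota": 3100,
                 "Proteobacteria": 900, "Actinobacteriota": 300}  # hypothetical reads

counts = np.array(list(phylum_counts.values()), dtype=float)
p = counts / counts.sum()                 # relative abundances
shannon = -np.sum(p * np.log(p))          # Shannon diversity index
fb_ratio = phylum_counts["Firmicutes"] / phylum_counts["Bacteroidota"]

print(f"Shannon H' = {shannon:.2f}, Firmicutes/Bacteroidota = {fb_ratio:.2f}")
# In the study's reading, a lower F/B ratio after GEN treatment indicates reduced dysbiosis.
```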
An increased abundance of Alistipes plays a vital role in the protection of gut barrier function and the attenuation of DSS-induced colitis [39,40]. In the GENs treatment group, the abundance of Alistipes was slightly higher than that in the control group.

Thus, the amount of beneficial bacteria, including Lactobacillus and Alistipes, was significantly increased in the GENs treatment group, while the abundance of harmful bacteria such as Helicobacter and Odoribacter was decreased.

Discussion

PENs have received increasing attention in regenerative medicine due to their many clinical advantages, such as relatively few safety concerns, good biocompatibility, high stability, and significant therapeutic activity. PENs can be produced in high yield and with high purity, and thus they have been widely studied in various pathological conditions [10,11,41,42]. It is well known that PENs participate in cell-to-cell communication, thereby inducing multiple therapeutic benefits in inflammatory diseases [43-45]. Reports show that PENs can target specific cells and modulate the mammalian innate immune system without any significant toxicity in vivo [41,42].
Although Panax ginseng and its bioactive metabolites have been shown to be effective against many inflammatory diseases, whether GENs can have any effect on colitis and modulate the gut microbiome has not been assessed. Aberrant NF-κB activation has been found in the inflamed intestine; thus, several pharmacological approaches aim to inhibit the activation of the NF-κB signaling pathway [46,47]. From a therapeutic application aspect, we uncovered that GENs suppress the activation of NF-κB under inflammatory conditions and thereby exert anti-inflammatory activity (Fig. S2) [48,49]. From this, the sequential signal interactions in M2 macrophage proliferation can be inferred [50]. We also observed that GENs downregulate pro-inflammatory cytokines and promote M2 polarization of macrophages. Additionally, we observed that GENs upregulate the anti-inflammatory cytokine IL-10 in mice. Therefore, we concluded that GENs could elicit anti-inflammatory activities in the immune microenvironment. Furthermore, GENs-mediated suppression of NF-κB needs to be investigated further. However, according to previous reports, the role of IL-10 in intestinal inflammation is controversial. The secretion of IL-10 represses the expression of pro-inflammatory cytokines and thereby contributes to restoring the immune balance [51,52]. However, as IL-10 and IL-6 are involved in various diseases, it has been proposed that modulation of cytokine levels within the microenvironment can improve the immune imbalance. On that assumption, GENs and their bioactive molecules can enhance the polarization of M2 macrophages and may also promote the production of IL-10. Disruption of the gut microbiota composition promotes inflammatory processes associated with upregulation of pro-inflammatory cytokines [53-55]. GENs reinforce a balanced immune response in colitis, and our gut microbial-profile studies demonstrated that GENs can alter the bacterial composition of gut phyla and affect gut homeostasis. Thus, this study implies that GENs could improve the intestinal environment through an increase in beneficial bacteria such as Bacteroidota. Probiotics that remain stably in the intestine contribute to the balance of the immune response in gut microbiota-associated diseases. An increase in probiotics could provide health benefits to the host within the gut [56]. Imbalance in gut bacterial composition in DSS-induced colitis is accompanied by intestinal symptoms. Taken together, we speculate that oral delivery of GENs to the intestine, with a longer retention time, can coordinate the cross-talk between host cells and the gut microbiota, help slow the progression of DSS-induced colitis, and ameliorate symptoms by strengthening the colonic mucosal barrier. This strategy could provide an alternative approach for edible plants or wild phytochemicals in DSS-induced colitis. However, the exact mechanisms underlying these effects remain elusive. Oral administration of GENs for therapeutic application is challenging. The relationship between the gut microbiota and host immunity is complex, and the mechanisms whereby GENs support gut immune homeostasis and modulate gut dysbiosis in IBDs should be clarified. Overall, this study describes the benefits of GENs on the innate immune system and gut microbiota and suggests that GENs can serve as therapeutic agents in DSS-induced colitis.
Conclusion

Although there have been some reports of GENs being involved in the modulation of macrophages in the tumor microenvironment and the suppression of tumor progression, the function of GENs in inhibiting inflammation is poorly understood. In this study, we found that GENs are very stable in simulated enzyme solutions, are efficiently taken up by immune cells, and then induce therapeutic activity in DSS-induced colitis. GENs could prevent and inhibit the inflammatory reaction by inducing downregulation of pro-inflammatory cytokines in pro-inflammatory macrophages, while inducing anti-inflammatory macrophage expression through inhibition of NF-κB. Furthermore, administration of GENs led to modulation of the composition of the gut microbiome and gut microbiota in the inflamed intestinal environment, suggesting that GENs could aid in treatment by altering the regular course of these pathological intestinal conditions. Ingesting GENs would be expected to slow colitis progression, strengthen the gut microbiota, and maintain gut homeostasis by preventing bacterial dysbiosis.

Declaration of competing interest

The authors declare no conflict of interest.

Fig. 1. Characterization of ginseng-derived exosome-like nanoparticles (GENs). (A) (B) Measurement of the size and surface charge of GENs by dynamic light scattering (DLS) analysis. (C) Counting of GEN particles by nanoparticle tracking analysis (NTA). (D) TEM (left) and Cryo-TEM (right) imaging of GENs. (E) Quantification of the protein concentration of GENs by Bradford assay.

Fig. 2. Suppression of M1 macrophage polarization and pro-inflammatory cytokines, induction of an anti-inflammatory cytokine in M1 polarization, and in vitro uptake of GENs by Caco2 and RAW 264.7 cells. (A) RAW 264.7 cells were polarized into M1 macrophages with 0.1 µg/mL of LPS. After treatment with GENs from 1 to 50 µg/mL, the gene expression of IL-6 was analyzed. (B) Cell population of F4/80+ CD206+ cells induced by treatment with GENs. (C) M1 and M2 macrophage subtypes induced by GENs. (D,E,F) In M1 macrophages, the gene expression of pro-inflammatory cytokines, including TNF-α and IL-6, and of an anti-inflammatory cytokine was analyzed. (G) (H) Representative histograms and cellular uptake efficiency of DiD-labeled GENs were analyzed using flow cytometry after 1, 3, and 6 h of incubation. (I) Caco2 cells were incubated with DiD-labeled GENs for 6 h and imaged by confocal microscopy (Blue: Hoechst; Red: DiD-labeled GENs).

Scheme 1. Dextran sulfate sodium (DSS)-induced colitis in vivo. (A) Administration of 2.5% (w/v) DSS in drinking water and the inflammatory responses in macrophages. (B) After administration of 2.5% DSS, the intestinal environment was greatly altered and disturbed, with translocation of the bacterial community through the damaged intestinal mucosal structure.

Fig. 3. Administration of GENs protects against DSS-induced colitis. Balb/C mice were orally treated with 0.02 g of freeze-dried GENs dissolved in 200 µL of 1X PBS, or with PBS, every day for 7 days before administration of 2.5% DSS in water. At day 8 of oral gavage of GENs, 2.5% DSS administration was started via the drinking water. (A) Weight loss of mice treated with GENs or PBS after administration of 2.5% DSS in water. (B) (C) After administration of GENs every day for 7 days, the mice were sacrificed and colon lengths were measured. (D) mRNA expression of inflammatory genes was detected by RT-qPCR. (*P < 0.05, **P < 0.01, ***P < 0.001).
Fig. 4. Therapeutic effect of GENs in DSS-induced colitis. (A) (B) Measurement of colon length. (C) Colon thickness imaged by CT scan. (D) Weight loss of DSS-induced colitis mice treated with 1X PBS or GENs. (E) Disease Activity Index (DAI) assessment. The DAI was evaluated using three parameters (body weight loss, stool consistency, rectal bleeding). (F) Survival rate of Balb/C mice gavaged with 2.5% DSS and treated with PBS or GENs. (G) Gene expression of inflammatory genes was analyzed. (H) Representative H&E-stained colonic tissue from Balb/C mice with or without 2.5% DSS gavage at day 8 after treatment with GENs. (I) Evaluation of serum AST and ALT levels after administration of GENs (*P < 0.05).

Fig. 5. The recovery effect of GENs treatment on the intestinal microbiome. (A) Chao and Shannon indices of the DSS control group and the GENs treatment group. (B) Heatmap of the relative abundance of the microbiome at the phylum level in each group. (C) Phylum profiles of the gut microbiome. (D) Pie chart of the microbiome at the phylum level. (E) The Firmicutes/Bacteroidota ratio in each group. (*P < 0.05).
The Impact of Vitamin K2 on Energy Metabolism

Environmental and behavioral adaptations introduced during the last decades have synergistically enhanced man's lifespan, but have also paved the ground for disease states involving impairment of multiple organs, which both modulate and depend on homeostatic calorie "accounting." Diabetes, obesity, and/or bone brittleness now occur frequently in our society, inducing ailments that affect overall health and well-being. Therefore, an improved comprehension of how organs (e.g., bone and adipose tissue) provide homeostasis, and sound strategies to treat these diseases, thus improving the health and life quality of most age categories, should be sought. The steroid and xenobiotic receptor (SXR) (pregnane X receptor = PXR) is a nuclear steroid-like hormone receptor, stimulated by hormones, steroids, drugs, and xenobiotic compounds. SXR exhibits a versatile ligand-binding domain, serving as a xenobiotic sensor regulating xenobiotic clearance from the liver and intestine. However, new and interesting functions of SXR in the regulation of inflammatory processes, cholecalciferol and bone metabolism, lipid and energy homeostasis, and cancer therapy have emerged. Hence, the discovery and pharmacological development of new PXR modulators, like vitamin K2, represents an interesting and innovative therapeutic approach to combat various diseases, of which glucose and lipid metabolism (i.e., energy metabolism and adiposity) should be emphasized.

Introduction

In the past years, we have seen a plethora of research reports illuminating the link between bone physiology and energy metabolism, even though the central and sympathetic nervous systems, as well as the gastrointestinal and pancreatic axes, serve essential functions related to the systemic regulation of energy expenditure. Adipose tissue is particularly prominent because of its central role in storing and dissipating energy. During the last 25 years, major progress has been reported within the medical community, featuring a modern understanding of the origin of fat tissues, their specialized characteristics and functioning, as well as the pathophysiological consequences of their impairment. These advances consequently led to the discovery that adipose tissue metabolism is heavily coupled to the homeostasis of the skeleton. Fat tissue is able to deposit and release energy-rich compounds during feeding and fasting, respectively, and it modulates energy homeostasis in a variety of organs via its endocrine potential. Fat cells (adipocytes) may accumulate energy in the form of triglycerides, as well as burn it by degrading fatty acids via so-called β-oxidation. Additionally, adipocytes produce and release so-called adipokines, among which leptin and adiponectin are the most important. These hormones regulate both the ingestion of calories and the body's sensitivity to insulin. The functional multiplicity of adipose tissue is sustained by various subtypes of adipocytes in fat-storing depots. Mitochondria-sparse white adipose tissue (WAT), characterized as visceral and subcutaneous fat, stores energy as triglycerides, the level of which is regulated by the body's sensitivity to insulin and the overall metabolism of glucose in the liver and skeletal muscle cells, respectively.
Contrastingly, mitochondria-enriched brown adipose tissue (BAT), which in adults is positioned as discrete entities localized in the neck and other regions of the trunk of the body [1], serves to dissipate energy to sustain adaptive thermogenesis [2]. This process is facilitated by the action of uncoupling protein 1 (UCP1), which stimulates a leakage of protons in order to uncouple respiration from ATP synthesis, thus favoring heat production. The thermogenesis sustained by BAT is mastered via the central nervous system, the impact of the catecholamine signaling system (i.e., β-adrenergic signaling), as well as deiodinase 2 (Dio2)-facilitated conversion of thyroid hormone from T4 (thyroxine) to T3 (triiodothyronine). As an extra feature, beyond its role in adaptive thermogenesis, BAT also protects against obesity, as well as against insulin resistance and the development of diabetes [5][6][7][8]. Finally, it is worth noticing that genetic ablation of BAT in small experimental animals results in diet-induced obesity, diabetes mellitus with insulin resistance, and elevated blood lipids [3]. It has been asserted that BAT may originate from two sources. The classical or ordinary, preformed BAT stems from Myf5-positive dermomyotomal progenitor cells, which may also yield skin and muscle, and functions in so-called non-shivering thermogenesis [4]. On the other hand, Myf5-negative progenitor cells may differentiate into white adipocytes, which play a role in energy storage, or into BAT-like or "beige" fat cells. The latter adipocytes demonstrate both brown and white fat cell characteristics [5]. Of major importance is the fact that BAT-like adipocytes can be transformed into WAT-like adipocytes through a plethora of mechanisms, of which a few deserve mentioning: cold exposure, endocrine action of FGF21 [6] or irisin [7], and via transcriptional regulators including FoxC2 [8], PRDM16 [9], and PPARγ [10], which lead to SirT1-mediated deacetylation of the PPARγ protein [11]. Beige fat possesses powerful anti-obesity and anti-diabetic activities. Overexpression of the BAT-specific transcription factors FoxC2 or PRDM16 in WAT adipocytes has been shown to protect mice from diet-induced obesity and metabolic dysfunction [9]. Furthermore, ablation of beige fat cells by adipocyte-specific deletion of the transcriptional modulator PRDM16 yields experimental animals that become prone to diet-induced obesity and ensuing insulin resistance [12]. In line with the article published by Lecka-Czernik et al. in Archives of Biochemistry and Biophysics, 2014 [13], we performed the following experiment: human adipose stem cells and 3T3-L1 preadipocytes, the latter with an activating mutation (ref) in the Gi2α protein, were differentiated into fully mature adipocytes, as indicated by coloration with Alizarin Red. Interestingly, both adipocyte species accumulated triglycerides upon differentiation into mature adipocytes; however, exposure to vitamin K2 (MK-7) diminished their ability to turn into fully mature adipocytes. Furthermore, both adipocyte species were tested for the expression of various genes differentiating white adipocytes from beige adipocytes.
From the gene expression profile, featuring genes like the β3-adrenergic receptor (β3-AR), Foxc2, PGC1α, PPARα, Dio2, UCP1, Adipoq, and Leptin, it was quite obvious that the preadipocytes in question differentiated more in the "direction" of beige, lipid (or fatty acid)-metabolizing adipocytes than into white, triglyceride-storing adipocytes. Hence, it could be concluded, or hypothesized, that vitamin K2 (MK-7) was in fact able to direct preadipocytes into becoming an energy-dissipating, rather than energy-storing, adipocyte phenotype (see Figure 1, left and right panels). A putative model featuring this development, referring to a suggested mechanism, is given in Figure 2. Here, we have demonstrated that vitamin K2 affects the differentiation of preadipocytes into mature adipocytes, ensuring that the "end-point" phenotype may be tilted in the direction of the beige, energy-"dissipating" phenotype. In their paper from 2014, Lecka-Czernik and coworkers [13] assert that an impairment in fat function correlates with a reduced bone mass and an increased incidence of fractures. But the question posed by the authors is: does accumulation of bone marrow adipose tissue (BMAT) exert a detrimental effect on bone structure and/or mineralization, and will a diminished bone mass stimulate the accumulation of BMAT? Historically, BMAT was construed as a dormant or inert type of fat that accumulated within the bone marrow as an empty-space filler subsequent to the involution of hematopoietic tissue. Hence, its relationship with a lower bone mass was construed as circumstantial. However, novel evidence asserts that marrow adipogenesis should be construed as a process tightly bound to the differentiation of osteoblasts, since they "share" common precursor cells and are subjected to the same modulatory signaling patterns. The opposite end results, together with a positive correlation with overall fat metabolism, indicate that BMAT in fact actively induces bone mass loss and inferior bone quality. Both over- and malnutrition represent systemic changes in energy metabolism, affecting bone fat volume and bone mass. Despite the fact that adipose (overweight) individuals present with a higher body weight, they more often than not demonstrate a lower BMD. In obese older men and women, an enhanced percentage of body fat and a low amount of lean (muscle) mass predict a lower BMD and thus an enhanced frailty risk [14]. The enhanced fracture risk incurred by postmenopausal women and older men [15] is mainly due to enhanced circulatory levels of adipose tissue-derived adiponectin and pro-inflammatory cytokines, with a concomitantly lower secretion of leptin and IGF-1. Furthermore, one will often detect lowered blood levels of 25-hydroxyvitamin D and higher serum parathyroid hormone levels in these "patients" [15]. Figure 1. Human adipose stem cells and mutated 3T3-L1 preadipocytes were differentiated towards mature adipocytes. Top panel: Control cells not treated with differentiating medium show a lack of ability to store lipids (no Alizarin Red accumulation). Cells differentiated in the presence of insulin, as well as IBMX (an inhibitor of phosphodiesterase), accumulate lipids, while cells additionally exposed to vitamin K2 (MK-7) lose their ability to produce and store triglycerides. Bottom panel: Percentage modulation relative to the "control" stage seen upon exposure of human adipocytes and mutated 3T3-L1 cells to differentiating medium versus exposure to vitamin K2 (MK-7).
Cells treated with vitamin K2 exhibit significant increments in genes characterizing beige adipocytes, where the alterations in PGC1α, PPARα, PPARγ, Dio2, and UCP1 expression were most prominent.

Vitamin K2 and its mechanism of action - beyond xenobiotic metabolism

The steroid and xenobiotic receptor (SXR), which is synonymous with the pregnane X receptor (PXR), is characterized as a nuclear hormone receptor that is stimulated by a plethora of hormones, dietary steroids, pharmaceutically active agents, and xenobiotic compounds. SXR exhibits a ligand-binding domain which diverges across mammalian species. SXR is construed as a xenobiotic sensor, facilitating xenobiotic clearance in the liver and intestine. However, newly published experiments unravel novel roles for SXR in modulating phenomena like inflammation, bone turnover, metabolism of vitamin D, lipid and energy homeostasis, as well as cancer. The characterization of SXR as a conveyor of hormone-like signals has now been recognized as a key instrument for the study of novel mechanisms through which diet may ultimately affect health and disease. The discovery and pharmacological development of new PXR modulators, like vitamin K2, might represent an interesting and innovative therapeutic approach to combat various ailments and diseases. Figure 2 shows that K2, via binding to the transcription factor PXR/SXR, affects several signaling molecules and/or pathways, via the β3-adrenergic system, impinging on signaling molecules like PKA, PGC1α, C/EBPβ, and Dio2, which eventually affects the activity of the uncoupling protein UCP1, which produces heat (and not ATP) from fatty acids. Not mentioned in the text is the reciprocal regulatory loop consisting of the microRNA species hsa-mir-155 and the transcription factor C/EBPβ, which can be manipulated to reinforce the "beige" phenotype of the adipocyte after differentiation from stem cells or preadipocytes. Without elaborating on the details of how different natural products modulate the activity of SXR to affect gene expression, such as St. John's wort (Hypericum perforatum), vitamin E (tocopherols and tocotrienols), sulforaphane, and Coleus forskohlii, the latter producing forskolin, which is able to stimulate adenylate cyclase activity, suffice it to say that K2 emerges as an interesting "player" on the scene of major metabolic pathways, accounting for a plethora of actions to be reckoned with [16]. One good example is the effect of SXR activation on the metabolism of cholesterol and lipid turnover; a second is its impact on the Fox transcription factors (i.e., FoxO1 and FoxA2) and their influence on energy homeostasis [16]. FoxO1 and FoxA2 are both members of the "forkhead" family of transcription factors, serving critical roles in both lipid metabolism and gluconeogenesis in the liver [17]. FoxO1 stimulates hepatic gluconeogenesis during fasting through the activation of gluconeogenic genes, such as PEPCK1 (phosphoenolpyruvate carboxykinase 1), G6P (glucose-6-phosphatase), and insulin-like growth factor-binding protein 1. FoxA2, on the other hand, serves as a key switch, representing one of the regulatory factors of hepatic fatty-acid breakdown during calorie restriction (i.e., fasting). Via mammalian cell-based two-hybrid screening, it was feasible to identify FoxO1 as a coactivator of both CAR (constitutive androstane receptor)-mediated and SXR-stimulated transcription [18].
FoxO1 may directly associate with CAR and SXR as a hormone-ligand-receptor complex and stimulate their transcriptional ability. In turn, both CAR and SXR function as corepressors of FoxO1, suppressing FoxO1-mediated transcription by counteracting its association with its response elements within the susceptible genes. As well as obliterating FoxO1 activity, drug-stimulated SXR and CAR species were also shown to downregulate HNF4α transcriptional power via suppression of PGC1α, thus suppressing the transcription of both PEPCK1 and G6P [19]. This indicates that the metabolic turnover of drugs and of glucose, two major functions of the liver that are regulated independently, happen to be reciprocally coregulated via communication between xenobiotic sensors on one hand, and transcription factors in the liver on the other. When blood sugar concentrations are rendered low due to fasting or subsequent to periods of exercise, the liver funnels energy-rich molecules to extra-hepatic tissues and peripheral organs via either β-oxidation or the production of ketone bodies [20]. FoxA2 stimulates both ketogenesis and β-oxidation via enhancement of the transcriptional activity of a plethora of genes, such as mitochondrial 3-hydroxy-3-methylglutaryl-CoA synthase 2 (HMGCS2) and carnitine palmitoyltransferase 1A (CPT1A), in energy-depleted conditions (fasting) or subsequent to periods of extensive exercise [21]. FoxA2 is phosphorylated, and thus inactivated, by the Akt pathway, and serves to decrease lipid turnover in response to insulin. Treatment with other drugs, such as barbiturates, has been shown to suppress lipid metabolism in an insulin-independent fashion [22]. Furthermore, it was reported that SXR may crosstalk with FoxA2 in order to induce the repression of lipid turnover in the livers of fasting mice. By applying wild-type and SXR-/- animals, it was demonstrated that treatment with PCN (pregnenolone-16α-carbonitrile) diminished the steady-state mRNA levels of HMGCS2 and CPT1A in control animals, but not in SXR-deficient mice. Biochemical and cell-based analyses showed that SXR markedly downregulates the ability of FoxA2 to bring about activation of the HMGCS2 and CPT1A genes. SXR, via its ligand-binding domain, associates directly with the DNA-binding region of FoxA2, and the ensuing interaction halts the "coupling" between FoxA2 and its DNA response elements. The communication between SXR and FoxO1/FoxA2 signifies that SXR, apart from being a modulator of drug metabolism in the liver, also serves as an important regulator of hepatic glucose and energy homeostasis. Hence, since vitamin K2 binds to and enhances SXR-mediated transcription, it may serve as a natural "drug" to ingest when one is aiming to treat insulin resistance and type II diabetes mellitus. The impact of vitamin K2 via SXR on metabolic functions in general is described in more detail in the forthcoming paragraphs ("SXR in glucose handling" and "SXR in lipid turnover") [22]. In Figure 3, we show our own experiments indicating that the adipocyte phenotype, characterized by the expression of the transcription factor PPARγ, is heavily dependent on the presence of FoxO1 and FoxA3, respectively, since siRNA against either of them completely obliterates the stimulatory effect of vitamin K2 obtained through its binding to SXR.
However, the literature in general implicates a plethora of PI3K/Akt-stimulated transcription factors of the FoxA and FoxO families in the cascade of insulin/IGF-1-mediated signaling. The present data show, for the first time, that vitamin K2 (i.e., the MK-7 variant) directly stimulates FoxO1 and FoxA3, short-cutting the insulin/IGF-1 activation cascade. The assertion to be drawn is simply: vitamin K2 may directly fortify the action of insulin, thus ensuring better glucose homeostasis, as well as protection from a detrimental turnover of lipids and protein structures of the body.

The molecular mechanisms of SXR-mediated gene repression

Currently, SXR has been described as a repressor of gluconeogenic genes, such as glucose-6-phosphatase (G6Pase) and phosphoenolpyruvate carboxykinase 1 (PEPCK1), thus implicating vitamin K2 in metabolic (energy-related) reactions taking place in the liver [23][24][25]. The SXR-vitamin K2 complex may therefore interfere directly with transcription factors and thus be rendered responsive to both insulin and glucagon. Consequently, one would observe a release of transcription factors, with their coactivators, from target genes, which would lead to a general subactivation of gene transcription. Furthermore, it has been asserted that SXR represses the CYP genes through interference with the vitamin D receptor, VDR [26]. Hence, in tissues short of vitamin D3, liganded SXR would associate directly with vitamin D response elements (VDREs) and modulate transcription. Therefore, it may be asserted that vitamin K2 and vitamin D, as well as vitamin A (via VDR and RXR, respectively), and many other transcription factor-like molecules (e.g., PPARs, FXR, LXRα, LRH-1 = NR5A2, RXR), when associated with their ligands, may act synergistically on gene transcription in general [16,[27][28][29]. Hence, it is not straightforward to predict the net results of a certain combination of liganded transcription factors on biological processes. However, there are several excellent reports on the impact of vitamin K2, in association with the nuclear factor SXR, on cellular metabolism.

Involvement of SXR in metabolic functions

Xenobiotics are able to enhance SXR-mediated expression of xenobiotic-metabolizing enzymes in both the liver and intestines. Even though such modulation normally serves to detoxify the xenobiotics in question, these "alien molecules" enhance the production of intermediates which can harmfully attack the body's tissues [30,31]. Additionally, SXR affects the balance of endobiotics (e.g., steroid hormones, cholesterols, and bile acids) handled by the same biochemical pathways. Hence, SXR activation will consequently stimulate a plethora of physiological responses, i.e., in the liver, which plays an important role in the processing of, among many substances, glucose and lipids. Disruption of their metabolic fate may result in diseases, of which type II diabetes (T2DM) and obesity are most frequently encountered. Newly published studies of SXR-KO and SXR-humanized animals clearly shed light on the metabolic functions of SXR in man.

SXR and its role in energy metabolism of the liver

The liver provides energy sources to the rest of the body, i.e., carbohydrates and lipids are catabolized in order to fuel both central and peripheral tissues and organs. The effect of SXR on hepatic energy turnover was discovered with the aid of SXR-KO mice [25,32,33].
SXR in glucose handling

The pancreatic hormones (insulin and glucagon) reciprocally regulate the blood glucose level via transcriptional processes, in which rate-limiting enzymes of glucose metabolism, like G6Pase and PEPCK1, play a decisive role [34][35][36]. Glucose-6-phosphatase dephosphorylates glucose-6-phosphate (G6P), constituting the endpoint of both the glucose-forming and glucose-utilizing reactions, while PEPCK1 converts oxaloacetate to phosphoenolpyruvate (PEP) in the gluconeogenic reaction. These enzymes are instrumental in controlling blood glucose levels. During fasting and/or prolonged exercise, glucagon dominates glucose metabolism by activating the cAMP/PKA signaling pathway [35,37,38]. When phosphorylated by PKA, the cAMP-response element-binding protein (CREB) stimulates both G6Pase and PEPCK1. Insulin, however, acts in an opposite manner: in response to high blood glucose levels, insulin is secreted and activates the phosphoinositide 3-kinase (PI3K)/Akt signaling pathway [39]. Thereafter, Akt phosphorylates and inactivates the transcription factor forkhead box O1 (FoxO1). FoxO1, a key regulator of glucose turnover, stimulates the insulin response sequence (IRS)-bearing genes G6Pase and PEPCK1 [40]. When phosphorylated by Akt, FoxO1 can no longer translocate to the nucleus; it is rapidly acetylated and therefore loses its activity [41,42]. Using SXR-KO and SXR-humanized animals, it has been demonstrated that SXR is an important regulator of xenobiotic-dependent glucose turnover in the liver. Treatment of mice with potent SXR activators consistently leads to lowered blood glucose levels in laboratory animals [32]. Furthermore, it was shown that "cross-talk" between SXR and FoxO1 is a molecular mechanism underlying the downregulation of glucose metabolism [18]. It was reported by Kodama and Negishi that liganded SXR directly interacts with phosphorylated CREB in primary hepatocytes [25], and that SXR disturbs the binding of CREB to CRE, with an ensuing repression of the CREB-mediated transcription of the G6Pase and PEPCK1 genes. Based on the information available hitherto, it can be asserted that SXR, by "targeting" a plethora of factors modulated by insulin and glucagon, modulates the activity of many genes functioning in the intrinsic regulatory machinery that maintains serum glucose levels within "healthy" limits.

SXR in lipid turnover

The liver provides lipid-derived, energy-rich compounds to different parts of the body. Hepatic lipid metabolism is controlled by the net influence of the "reciprocal" hormones insulin and glucagon, as well as by nutritional conditions. Some 10 years ago, Kodama and Negishi, along with Nakamura [25,32], published that treatment with PCN (an activator of SXR) decreased the mRNA levels of carnitine palmitoyltransferase 1a (CPT1a) and 3-hydroxy-3-methylglutaryl-CoA synthase 2 (Hmgcs2) in the livers of starved wild-type mice, but not in SXR-KO mice [32]. CPT1a is instrumental in overall mitochondrial β-oxidation by funneling long-chain fatty acids into mitochondria [43], and the mitochondrial enzyme HMGCS2 facilitates the initial reaction of ketogenesis [44]. Additionally, stimulation of SXR by PCN has been demonstrated to enhance the steady-state mRNA level of stearoyl-CoA desaturase 1 (Scd1) in the hepatic tissue of starved wild-type experimental animals.
SCD1, which serves as a key enzyme in hepatic lipogenesis, facilitates the rate-limiting step in the synthesis of unsaturated fatty acids [45]. The plasma concentration of 3-OH-butyrate was decreased, while the hepatic level of triglyceride (TG) was increased, by PCN treatment in wild-type mice under the assay conditions. However, neither TG nor cholesterol levels in the blood were altered in those animals, despite the fact that there was a significant rise in TG accumulated in their livers. Hence, as a means of survival during fasting, SXR is thought to slow down hepatic lipid turnover by repressing β-oxidation and ketogenesis, while stimulating the transcription of lipogenic enzymes, in much the same way as induced by insulin. The Akt-regulated forkhead transcription factor FoxA2, which serves as a facilitator of insulin-dependent modulation of β-oxidation and ketogenesis, enhances expression of both the CPT1a and HMGCS2 genes [21,46]. It is well known that insulin activates the PI3K/Akt signaling pathway to phosphorylate FoxA2 and translocate it from the nucleus to the cytosol, thereby downregulating both genes. It has been asserted that a direct interaction between SXR and FoxA2 serves as the mechanism by which SXR represses the transcription of CPT1a and HMGCS2 in the liver [32]. A plethora of transcription factors and coregulators have been asserted to serve as modulators of hepatic lipid metabolism, e.g., the peroxisome proliferator-activated receptors (PPARs), the liver X receptor α (LXRα), as well as the sterol regulatory element-binding proteins (SREBPs) [47]. The expression of SREBP1c, which is construed as the dominant regulator of hepatic lipogenesis, is under the control of LXRα, and mediates the insulin- and fatty acid-dependent responses of lipogenic genes such as fatty acid synthase (FAS), acetyl-CoA carboxylase 1 (ACC1), stearoyl-CoA desaturase 1 (SCD1), and fatty acid elongase (FAE). SXR is believed to upregulate lipogenesis in the liver independently of SREBP1c action, and it is not deemed to be associated with the steady-state expression levels of the Fas and Acc1 genes. Among the cluster of lipogenic genes, Cd36 (cluster of differentiation 36) is deemed to serve as a direct target of SXR in the liver; upon stimulation by ligands, the receptor is believed to be recruited to a DR3-type SXR response element within the Cd36 promoter region in the livers of experimental animals [48]. Furthermore, SXR has been asserted to serve as a link facilitating the upregulation of the Pparγ gene, which functions as a strong regulator of lipid-synthesizing enzymes [48]. Such cross-talk involving nuclear receptors should confer a significant impact on the body's lipid homeostasis. Our data are in line with the published literature; however, it should be asserted that SXR probably affects a larger spectrum of FoxO and FoxA species than those presented in this review. In this way, one might speculate that SXR is able to recruit a "moving" representation of these transcription factors simultaneously, and that the net effect on various cell phenotypes depends on: (1) the distribution of FoxOs and FoxAs at any time within the cell or tissue, as well as (2) the epigenetic machinery or "make-up" at any time within the same cells or tissues.
The effect of vitamin K2 on other genes related to metabolic processes in the cell

In 2009, Slatter and coworkers [49] published a paper featuring oligonucleotide microarrays with the intention of revealing the heterogeneity of drug metabolism-associated gene expression in liver tissue from healthy humans. Their intention was to define clusters of so-called "absorption, distribution, metabolism, and excretion" (ADME) genes, in order to define subgroups of coregulated genes. When analyzing the gene sets, they discovered distinct patterns of "parallel" gene expression featuring gene "clusters", which proved to be modulated by the nuclear receptor SXR. So-called "fold range metrics and frequency distributions" were applied in order to reveal the variability of solitary PKDM genes. The most variable gene entities chiefly correlated to: (1) drug metabolism, (2) intermediary metabolism, (3) inflammation, and (4) cell cycle control. Unique expression patterns of these genes allowed for further correlation with the parallel expression of a plethora of other genes. Of major interest was the identification of SXR-responsive genes. A comprehensive list of these genes can be found in the article; however, quite a few of them are related to metabolic processes in the cell. The genes are the following (in alphabetical order): CLOCK, DUSP7, GCDH, IGFBP2, MAP2K2, NUCB2, OGT, PFKFB1, PTPN11, and SLC16A2. By "looking up" current descriptions of these genes in GeneCards (http://www.genecards.org/cgi-bin/carddisp.pl?gene=NR1I2), the following features of these SXR-sensitive genes were obtained (parts of their descriptions are cited as presented):

CLOCK: Clock circadian regulator. The gene encodes a transcription factor which serves as a DNA-binding histone acetyltransferase. Interpretation: Polymorphisms in this gene may be associated with obesity and metabolic syndrome. CLOCK normally regulates gene products (proteins) in an optimal fashion, adapted to diurnal demands on the body related to food ingestion, physical activity, and recreation/sleep.

DUSP7: Dual-specificity phosphatases (DUSPs) constitute a large subgroup of cysteine-based protein-tyrosine phosphatases characterized by their ability to dephosphorylate both tyrosine and serine/threonine residues. Interpretation: DUSP7 may function as a modulator of cellular exposure to insulin and growth factors, keeping energy homeostasis within optimal limits.

GCDH: The protein encoded by this gene belongs to the acyl-CoA dehydrogenase family. It catalyzes the oxidative decarboxylation of glutaryl-CoA to crotonyl-CoA and CO2 in the degradative pathway of L-lysine, L-hydroxylysine, and L-tryptophan metabolism. Interpretation: GCDH is involved in the stability of mitochondria, and hence in energy metabolism in general.

IGFBP2: Insulin-like growth factor binding protein, type 2. This protein inhibits IGF-mediated growth. Interpretation: A reduction in IGFBP2 may be responsible for organ hyperplasia and the development of neoplasia (cancer).

MAP2K2: This MAP kinase kinase catalyzes the concomitant phosphorylation of a threonine and a tyrosine residue in a Thr-Glu-Tyr sequence located in MAP kinases. It activates the ERK1 and ERK2 MAP kinases (by similarity). Interpretation: This kinase may block tumorigenesis and normalize energy metabolism via MEK1/2 and the FoxO and FoxA families of transcription factors.
NUCB2: Anorexigenic peptide; seems to play an important role in hypothalamic pathways regulating food intake and energy homeostasis, acting in a leptin-dependent manner. Interpretation: An appetite regulator fortifying the effect of leptin, but independent of the size of fat depots.

OGT: Glycosylates a substantial and diverse set of proteins, encompassing species like histone H2B, AKT1, and PFK (phosphofructokinase). It can modulate their cellular processes through cross-talk between glycosylation and phosphorylation, or via proteolytic processing. Involved in insulin sensitivity in muscle cells and adipocytes by glycosylating components of insulin signaling; it blocks phosphorylation of AKT1, stimulates IRS1 phosphorylation, and attenuates insulin signaling. Interpretation: A modulator of insulin and IGF-1 signaling/sensitivity that helps maintain healthy muscle tissue and fat mass distribution.

PFKFB1: Encodes a member of the family of bifunctional 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase enzymes. These enzymes form homodimers which catalyze both the synthesis and the degradation of fructose-2,6-bisphosphate via independent catalytic domains. Fructose-2,6-bisphosphate serves as an activator of the glycolytic pathway and an inhibitor of the gluconeogenic pathway. Interpretation: Regulation of fructose-2,6-bisphosphate levels through the activity of this enzyme is thought to regulate glucose homeostasis. Multiple alternatively spliced transcript variants have been found for this gene.

PTPN11: PTP (protein tyrosine phosphatase) is a member of a large family of phosphatases and plays a regulatory role in various cell signaling events that are important for a diversity of cell functions, such as mitogenic activation, metabolic control, transcription regulation, and cell migration. Interpretation: Because of the activating effects of PTPN11 on ERK (extracellular signal-regulated kinase), a lack of PTPN11 activation may lead to adiposity, diabetes, and hyperleptinemia.

SLC16A2: A very active and specific thyroid hormone transporter molecule. Stimulates cellular uptake of thyroxine (T4), triiodothyronine (T3), reverse triiodothyronine (rT3), and diiodothyronine. Interpretation: A lack of SLC16A2 activation may lead to a reduction in the uptake and biological functions of T4 and T3, which is associated with adiposity, diabetes, and hyperleptinemia.
Absorbent hygiene products disposal behaviour in informal settlements: identifying determinants and underlying mechanisms in Durban, South Africa

Background: Within South Africa, many low-income communities lack reliable waste management services. In these contexts, absorbent hygiene product (AHP) waste, including nappies (diapers), is not recycled and is often dumped, ending up in watercourses and polluting the local environment. The structural barriers to collection have been well explored; however, the behavioural determinants of safe disposal of AHPs remain poorly understood. The purpose of this study is to determine the psycho-social factors driving AHP disposal behaviour among caregivers, while identifying potential underlying mechanisms (such as mental health) which may be influencing disposal behaviour, with the intention of informing a future, contextually appropriate and sustainable collection system.

Methods: This cross-sectional study was conducted within three low-income communities located within eThekwini Municipality (Durban), South Africa. The study included a pre-study and a quantitative survey of 452 caregivers, utilising the RANAS approach to behaviour change. The quantitative questionnaire was based on the RANAS model to measure psycho-social factors underlying the sanitary disposal of AHPs. Mental health was assessed using the Self-Reporting Questionnaire (SRQ-20). Statistical analysis involved regressing psycho-social factors onto disposal behaviour and exploring their interaction with mental health through a moderation model.

Results: Our findings suggest that one third of caregivers do not dispose of nappies sanitarily, despite intent (86.9%). Regression analysis revealed ten psycho-social factors which significantly predict the desired behavioural outcome, the sanitary disposal of AHPs. Caregivers with poor mental health were less likely to dispose of AHPs sanitarily, which reflects previous research linking poor mental health to the impairment of health-related daily activities, particularly within vulnerable groups. Specifically, several underlying psycho-social factors were moderated by poor mental health: the prevalence of sanitary disposal of AHPs depended on the mental condition of the caregiver.

Conclusions: Our findings confirmed the link between poor mental health and unsanitary AHP disposal. This is especially relevant because poor mental health is common within South Africa. Addressing mental health problems within these communities is an essential step toward providing sustainable waste management services. The findings informed an intervention strategy to implement a future collection system for these communities and similar low-income or informal contexts within South Africa.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12889-024-18396-y.
Introduction

Across the globe, inequality underpins access to waste management systems, structuring who can or cannot utilise or provide sustainable services [1][2][3]. In South Africa, the most unequal country in the world [4], this is particularly the case: nearly half the population lacks access to municipal waste collection [2,5]. Although the democratic South African state has made great strides over the past two decades to extend service provision to previously un-serviced areas, the gap remains most prominent in historically non-white communities, including traditionally governed rural and peri-urban land, as well as the multitude of informal settlements which have proliferated within, and on the margins of, South Africa's cities [6,7]. This inequality contributes to numerous health and safety impacts on affected communities, who are burdened with unclean spaces and riskier disposal options, while contributing to the leakage of solid waste into the natural environment, including rivers and oceans (Kalina et al., 2022a). Within the City of Durban, located on South Africa's eastern coast and part of the larger eThekwini Metropolitan Municipality, municipal officials have embarked on efforts to 'upgrade' informal settlements and provide basic services, including waste collection [9,10]. However, financial constraints within the municipality, the logistical hurdles of providing services within informal spaces, and the inability of poor residents to pay have severely hampered the provision of reliable waste management services [11]. Absorbent hygiene products (AHPs), which include disposable tissues, diapers, and feminine hygiene products, are essential to human dignity and hygiene, especially for women, the elderly, the ill, people who menstruate, parents, and other caregivers. Currently, few end-of-life (EoL) options exist for AHP waste, especially within the Global South, where the majority of AHP waste is disposed of in dumpsites or landfills [12]. Moreover, because the use of AHPs is expected to rise, it is anticipated that AHPs will become a growing waste management challenge, particularly in Southern cities with less robust municipal solid waste (MSW) systems, which, as the ongoing Covid-19 pandemic has demonstrated, are less able to manage increases in potentially hazardous waste [13][14][15]. Within South Africa, AHP waste, including nappies (diapers), is not recycled and is often dumped, especially in low-income communities (Schenck et al., 2019; Schenck et al., 2022). Moreover, the disposal of AHPs, and feminine hygiene products in particular, is complicated by the taboo or stigma which, in many cultures, including within South Africa, is attached to menstrual blood, forcing women into often hidden or unsafe disposal pathways for these items, such as in the bush or down the toilet (Kalina et al., 2022b; Roxburgh et al., 2020). As a result, improperly disposed AHPs are a significant source of waste leakage into the natural environment; in Durban, especially in low-income communities, AHP waste often litters hillsides and clogs stormwater drains, from where it washes into rivers, and eventually the sea (Kalina et al., 2022a). Given the challenges of providing solid waste management services and the difficult socio-economic conditions within these contexts, what drives AHP disposal behaviour, and what underlying mechanisms may be influencing individual disposal decisions?
Moreover, previous research from the World Health Organisation (WHO) [20] has suggested that underlying mechanisms, including chronic illness and mental health, may be impacted by waste within the environment, while influencing the waste management behaviours of affected individuals. Furthermore, previous research from within Southern Africa has suggested that poor mental health and depression can impair daily activities in vulnerable groups, including children and youth [21][22][23]. This connection is particularly relevant within South Africa, where the prevalence of mental disorders is particularly high (30.3%) [24]. The purpose of this study is to determine the psycho-social factors driving AHP disposal behaviour among mothers and caregivers, while identifying potential underlying mechanisms (such as mental health) which may be influencing disposal behaviour, with the intention of informing a future, contextually appropriate and sustainable collection system. Although there has been some investigation of behavioural factors driving recycling within South Africa [25,26], psycho-social evaluation has not yet, as far as we know, been utilised to investigate AHP disposal. Specifically, we ask: (1) which psycho-social factors are determinants of sanitary disposal of AHPs among caregivers in low-income contexts, and (2) how does mental health influence caregivers' sanitary disposal of AHPs? This work directly responds to a knowledge gap on the behavioural determinants of safe disposal and collection of AHPs, both in South Africa and globally. To identify the behavioural factors associated with caregivers' sanitary disposal and collection of AHPs, this study utilised the Risks-, Attitudes-, Norms-, Abilities- and Self-regulation (RANAS) approach to behaviour change, a methodological approach for developing, implementing, and evaluating behaviour change (BC) strategies that has been utilised in many Southern and low-income contexts similar to our case study [27][28][29][30][31]. The findings informed an intervention strategy to implement a future collection system for these communities and similar low-income or informal contexts within South Africa.

Study design, location and period

This cross-sectional study included a qualitative pre-study and a quantitative survey in low-income communities related to Durban, South Africa. Data collection took place in three pre-selected low-income communities: Mzinyathi, Johanna Road Informal Settlement, and Blackburn Village, in eThekwini Municipality, from September to November 2022. Each is a low-income settlement within eThekwini municipal boundaries (Fig. 1).
Both Johanna Road and Blackburn Village are informal settlements: housing areas that have been illegally built on municipal land, giving the appearance of impermanence, but which, over time, have become established communities. Mzinyathi, by contrast, is a sprawling, peri-urban settlement on the fringe of the municipality. Although they share similarities, the communities are differentiated in terms of housing construction density, settlement size, accessibility from the developed urban commercial-industrial centres, and surrounding land use. However, all three suffer from a variety of service delivery challenges: Johanna Road and Blackburn Village are informal and do not receive regular municipal services, and Mzinyathi is located on traditional authority land and likewise is not serviced by the municipality, as residents do not pay rates. As a result, waste management is a significant challenge in all three communities, with an immense amount of waste leaking into the natural environment from improper disposal and a lack of collection. Moreover, all three are located in close proximity to natural watercourses within a few miles of the ocean, hence increasing the potential environmental benefits of the study.

Study participants, sampling methods and sample size

The study participants were caregivers of children up to 5 years of age. The pre-study involved three focus group discussions (FGDs, N = 30) with caregivers (all of them mothers of a child up to five years). The pre-study was conducted to inform and develop the quantitative survey. The participants for the quantitative survey were selected using a random route method (every second house). In total, N = 452 caregivers were recruited for our research study. Participation in the study was completely voluntary. Written, informed consent was obtained from each participant. No individuals under the age of 18 were included, and the study did not encounter illiterate participants.

RANAS

The RANAS model has been developed using various psychological theories [32,33]. The model consists of five psycho-social factor blocks. Risk factors include health-related knowledge, perceived vulnerability, and perceived severity of the target behaviours. Attitude factors include beliefs about the costs and benefits of a target behaviour and feelings arising while performing the target behaviour. Norm factors comprise perceived social influence, such as the behaviour of others, others' approval, and personal importance. Ability factors include confidence in the performance of a particular behaviour. Self-regulation factors cover the management of conflicting goals and barriers, commitment, and remembering to perform the target behaviour. Furthermore, the RANAS model considers not only the psycho-social factors underlying intention, habit, and behaviour, but also three domains of contextual factors: the social, personal, and physical contexts. Culture, social relations, laws and policies, economic conditions, and the information environment constitute the social context. The natural and built environments comprise the physical context. Age, gender, education, and individual differences in the physical and mental health of the person are part of the personal context (Fig. 2).

Questionnaires and measures

The structured, face-to-face interviews were conducted in isiZulu. For those unable to read or write, the consent statement was read aloud, and individuals provided consent by making a mark on the subject signature line.
Participants were provided with a unique identifying number, and data were anonymized during data analysis. Data were accessed only by the authors.

Statistical analysis of data

The statistical analysis of the data was conducted using IBM SPSS 28 Statistics software and the PROCESS macro for SPSS [35]. To identify the most influential behavioural determinants, the psycho-social factors of the RANAS model underlying the target behaviour (independent variables) were regressed onto the sanitary disposal of AHPs as the outcome (the dependent variable). Correlations were used to investigate associations between study variables such as sanitary disposal of AHPs and mental health. T-tests and effect size calculations were used to compare means between the poor and good mental health groups [36]. A regression analysis method, PROCESS (see [37]), was applied to calculate the moderation model. The moderation model was used to test for interaction (when two variables influence each other's effects). Our moderation model included mental health as the moderator (M), sanitary disposal of AHPs as the outcome (Y), and psycho-social factors as predictors (X). Only significant factors from the linear regression analysis were tested in the moderation model. Moderation analysis was used to test the interaction between the moderator M (mental health) and the predictors X (psycho-social factors) in a model with outcome Y (sanitary disposal of AHPs). With evidence that X's effect is moderated by M, the analysis should confirm X's effect on Y at various values of the moderator (scale: 0-20 in our model).

Prevalence of common mental disorders (CMD)

To detect the group of caregivers at risk of developing common mental disorders (CMD), the SRQ-20 self-report instrument was used [34]. The SRQ-20 is a reliable and valid CMD measure consisting of a 20-item rating scale with a score range from 0 to 20 (cut-off point ≥ 7). The results revealed that the prevalence of CMD among caregivers (N = 450) in the three study communities was 20.4% (N = 92) (Fig. 2). Further t-test mean comparison analysis revealed significant differences between women and men, t(61.05) = -2.41, p = .019. Specifically, women (N = 408; M = 4.22, SD = 15.04) reported significantly more mental health-related symptoms (95% CI [-2.57, -0.24]) than men (N = 42; M = 2.81, SD = 3.42).

Factual and action knowledge about the relationship of waste to health and prevention

Only 28.3% (N = 130) of respondents answered that sanitary disposal of AHPs means to 'dispose of nappies in a designated bin/separate plastic bag for nappies', and 35.9% (N = 151) 'in a black plastic bag' (Table 2).

Use and sanitary disposal of AHPs

Of 452 caregivers, 93.1% (N = 421) reported that they use child nappies. Only 18.4% (N = 83) of the respondents reported that, in general, they dispose of AHPs in a designated/separate plastic bag for nappies, and 58.4% (N = 264) in a black plastic bag. Furthermore, only 17.9% (N = 81) of respondents answered that the last time they disposed of AHPs it was 'in a designated bin/separate plastic bag for nappies'; however, 49.8% (N = 225) disposed of AHPs 'in a black plastic bag' (Table 3).
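To make the SRQ-20 scoring and group comparison concrete, the minimal Python sketch below shows how the total score, the ≥ 7 cut-off, and a Welch's t-test (consistent with the fractional degrees of freedom reported above) could be computed. The response data here are simulated stand-ins, not the study data; the analyses in the paper were run in SPSS.

```python
import numpy as np
from scipy import stats

# Hypothetical yes/no (1/0) answers to the 20 SRQ items; the real study
# scored N = 450 caregivers from the survey responses.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(450, 20))

srq_scores = responses.sum(axis=1)   # total score, range 0-20
at_risk = srq_scores >= 7            # cut-off point >= 7 flags probable CMD
print(f"CMD prevalence: {at_risk.mean() * 100:.1f}%")

# Welch's t-test comparing two groups (here, a stand-in split mirroring
# the paper's N = 408 women and N = 42 men); equal_var=False gives the
# fractional degrees of freedom seen in t(61.05)
women = srq_scores[:408]
men = srq_scores[408:450]
t, p = stats.ttest_ind(women, men, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```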
Behavioural determinants

To investigate which psycho-social factors are determinants of sanitary disposal of AHPs among caregivers, we used linear regression with sanitary disposal of AHPs as the dependent variable and the RANAS psycho-social factors as independent variables. The regression analysis revealed that ten psycho-social factors significantly predicted sanitary disposal of AHPs; the model explained 45.6% of the variance in sanitary disposal behaviour (Table 5). A higher level of sanitary AHP disposal was significantly related to perceived vulnerability (β = 0.239, p = .000) and factual knowledge about the links between health and waste (β = −0.086, p = .033). Affective beliefs, such as feeling proud, stress-free, liking, or happiness (β = 0.214, p = .010), and beliefs about prevention and a safe and clean environment (β = −0.159, p = .010) connected to the sanitary disposal of AHPs also significantly predicted sanitary disposal behaviour. Social norm (personal obligation) (β = −0.112, p = .049) significantly predicted a higher frequency of sanitary disposal of AHPs as well. Action knowledge (how-to-do) (β = −0.187, p = .000), self-efficacy in a hurry, which represents confidence in performance (β = 0.142, p = .035), recovering after disruption (β = 0.171, p = .001), action control/planning (β = −0.139, p = .001), and remembering to dispose of AHPs sanitarily (β = 0.302, p = .000) were significant predictors of the target behavioural outcome.

These results suggest that by enhancing any of the ten significant psycho-social factors, while controlling for the others (all other factors held constant), an increase in the safe disposal of AHPs among caregivers can be expected. Specifically, an increase in the safe disposal of AHPs by 23.9% is anticipated among caregivers who recognize the health risks of unsafe AHP disposal (perceived vulnerability). Additionally, increases in the safe disposal of AHPs are expected of 2.3% among caregivers who are aware of the links between waste and health (factual knowledge), 38.8% among those who understand how to safely dispose of AHPs (action knowledge), and 22.4% among those who believe that safe disposal of AHPs prevents diseases (beliefs about prevention and a safe and clean environment). A further increase of 29.8% is anticipated among those who experience positive feelings (affective beliefs), and approximately 13.1% among those who feel personally obliged (personal obligation) to dispose of AHPs safely. Moreover, an improvement in the safe disposal of AHPs of 14.2% is likely among caregivers who are confident in their ability to correctly dispose of AHPs even in a hurry (confidence in performance in a hurry), and 19.2% among those confident in their ability to continue safe practices even when faced with obstacles (confidence in recovery). An increase of 12.9% is also expected among caregivers who are attentive (action control), and 33.8% among those who remember (remembering) to dispose of AHPs safely.

Interaction effects between psycho-social factors and mental health on behavioural outcome

To investigate whether mental health influences the relationship between the relevant psycho-social factors and sanitary disposal of AHPs, correlations (Spearman) and moderation analysis using PROCESS for SPSS 28 were applied [37]. Our moderation model included mental health as moderator (M), sanitary disposal of AHPs as outcome (Y), and psycho-social factors as predictors (X). Only significant psycho-social factors from the linear regression analysis were tested.
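A moderation model of this kind reduces to an ordinary regression with a product term, so the PROCESS analysis described above can be approximated with a short sketch like the following. The variable names (disposal, feelings, srq20) are hypothetical stand-ins for the survey measures; the pick-a-point probing at the 16th, 50th, and 84th percentiles mirrors the convention used in the figures of this paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file and column names.
df = pd.read_csv("caregiver_survey.csv")

# Mean-centre the predictor (X) and moderator (M) so that main
# effects are interpretable at average levels.
df["feelings_c"] = df["feelings"] - df["feelings"].mean()
df["srq20_c"] = df["srq20"] - df["srq20"].mean()

# Moderation = Y ~ X + M + X*M; a significant X:M term indicates
# that mental health moderates the effect of the predictor.
model = smf.ols("disposal ~ feelings_c * srq20_c", data=df).fit()
print(model.summary())

# Probe the interaction at the 16th, 50th, and 84th percentiles of
# the moderator (pick-a-point / simple slopes).
b_x = model.params["feelings_c"]
b_xm = model.params["feelings_c:srq20_c"]
for q in (16, 50, 84):
    m = np.percentile(df["srq20_c"], q)
    print(f"P{q}: conditional slope = {b_x + b_xm * m:.3f}")
```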
To investigate the relationship between mental health and the behavioural outcome, we used correlation analysis. The results revealed a significant positive relationship (Spearman correlation) between mental health scores and a higher frequency of not disposing of AHPs sanitarily (r = .099, p < .05). Caregivers with poor mental health were more likely not to dispose of AHPs sanitarily.

Five moderation models showed significant interaction effects between mental health and the psycho-social factors positive feelings, perceived vulnerability, beliefs about prevention and a safe and clean environment, confidence in performance, and remembering sanitary disposal of AHPs.

Moderation analysis revealed a significant interaction effect between mental health (M) and the psycho-social factor positive feelings (X) on sanitary disposal of AHPs as the outcome (b = 0.0283, 95% CI [0.0056, 0.0510], t = 2.45, p = .0148). Mental health moderated the effect of the psycho-social factor positive feelings on sanitary disposal of AHPs (Fig. 3).

Interpretation of results

This study initiated an interdisciplinary exploration of the psycho-social factors and underlying mechanisms, such as mental health, influencing caregivers' behaviour regarding the collection and sanitary disposal of AHPs in three low-income communities within eThekwini Municipality, Durban, South Africa. By integrating approaches from psychology, geography, engineering, and economics, our research aimed not only to map the quantitative waste generation and dumping hotspots but also to develop and implement behaviour change (BC) intervention strategies for enhancing the sanitary disposal of AHPs and initiating an AHP collection and recycling pilot.

Our findings revealed a concerning trend: approximately one-third of caregivers do not practice sanitary disposal of child nappies, despite a high reported intent for future sanitary disposal (86.9%) and collection (84.6%) practices. This discrepancy underscores a gap between intention and behaviour, potentially exacerbated by socio-economic constraints and mental health challenges. Notably, the study highlights a significant prevalence of poor mental health among caregivers (20.4%, i.e. every fifth caregiver was affected), with women being significantly more affected than men. This aligns with broader research indicating the negative impact of mental health on daily health-related behaviours, especially in vulnerable populations within low-income contexts. Comparatively, our findings resonate with studies from Malawi [21-23], yet they also underscore the critical need for targeted mental health interventions within BC strategies, a novel insight that adds depth to the existing literature on waste management and health behaviours.
Our application of the RANAS model explained a significant portion (45.6%) of the variance in sanitary disposal behaviours, which reaffirms its utility across diverse contexts, particularly in low-income countries (see publications: https://www.ranasmosler.com/publications). The identification of key determinants provided a nuanced understanding of the behavioural ecosystem surrounding AHP disposal. The most important determinants of sanitary disposal and collection of AHPs were perceived vulnerability regarding personal health and environmental risks, health-related factual knowledge, positive feelings towards sanitary disposal and collection of AHPs, beliefs about prevention and a safe and clean environment, personal obligation, action knowledge (how-to-do), confidence in performance (in a hurry and in recovering), action control/planning, and remembering sanitary disposal and collection of AHPs. Consequently, by targeting those psycho-social factors with BC interventions, we expect higher frequencies of sanitary disposal of AHPs among mothers and caregivers after the intervention. This interdisciplinary analysis not only validates previous findings but also reveals psycho-social factors that can inform more effective BC interventions.

By analysing the role of mental health, our study contributes fresh insights into the moderating effects of mental health on environmental and health behaviours, advocating for a more holistic approach to intervention design. Our results are in line with previous research showing that mental health moderates the effects of several psycho-social factors on target behaviour [21-23]. That is, the prevalence of the targeted behaviour depends on the mental state of caregivers. While this relationship was positive for all participants, it was more positive among mothers and caregivers with better mental health. The pronounced impact of mental health on sanitary disposal behaviours underscores an urgent need for integrated BC strategies that address mental health, particularly among women. Our findings suggest that targeting psycho-social factors, including health knowledge, environmental beliefs, and personal obligations, could significantly enhance sanitary disposal practices. The integration of mental health interventions presents a novel pathway to bolstering these efforts, potentially offering a blueprint for similar initiatives globally.

In summary, our investigation extends beyond the mere quantification of waste and mapping of dumping hotspots to uncover the deeply entrenched psycho-social and mental health factors influencing AHP disposal behaviours in low-income communities. By highlighting the critical role of mental health and providing a comprehensive analysis of behavioural determinants, our study not only corroborates existing research but also charts new directions for future studies and policy. The insights derived from this interdisciplinary effort offer a valuable contribution to the ongoing discourse on sustainable waste management, mental health, and community resilience, steering towards more informed and effective behaviour change interventions.
Limitations

The study communities were chosen through purposive sampling, focusing on three specific communities in relation to the city of Durban. Consequently, the insights obtained are closely linked to these communities, limiting the generalizability of our conclusions across different South African regions or other socio-economic contexts. This sampling approach, while beneficial for in-depth, context-specific understanding, may not reflect the full spectrum of experiences and behaviours present in varied settings.

Furthermore, the study's emphasis on psycho-social factors, though comprehensive, might not have captured all potential variables influencing the sanitary disposal of AHPs. The complex interplay of economic, cultural, and infrastructural factors also deserves attention, as these could significantly affect the implementation and effectiveness of BC interventions in diverse communities.

Future research should consider expanding the geographic scope of the study to include a wider range of communities, utilizing random sampling methods where feasible to enhance the representativeness of the findings. By addressing these limitations and following the outlined future directions, subsequent research can build upon our findings, offering deeper insights and more robust recommendations for improving waste management practices, mental health, and community resilience across varying contexts.

Practical implications

The study underscores the need for a comprehensive intervention strategy targeting critical psycho-social factors. Additionally, the intervention should leverage the most trusted communication sources identified by participants, including family, friends, and local media, to effectively disseminate behaviour change messages. Moreover, the successful implementation of behaviour change strategies necessitates not only tailored communication but also the provision of essential infrastructure, such as bins and collection systems, and the transformation of dumpsites into community spaces, thereby fostering a holistic approach to promoting sanitary disposal practices. Furthermore, addressing mental health is crucial, recognizing that the psychological well-being of the caregivers involved is essential for the sustained success of these practices.

BC intervention strategy for sanitary disposal and collection of AHPs

The study results (Table A1 in Annex) revealed that an intervention strategy should target the following psycho-social factors: perceived vulnerability, factual knowledge about the relationship between health and waste, beliefs about prevention and a safe and clean environment, affective beliefs (positive feelings), social norm (personal obligation), action knowledge, self-efficacy (confidence in performance and recovery), action control, and remembering. The corresponding behaviour change techniques should be delivered through the most trusted communication sources identified by participants (Table A2 in Annex).

In addition to the evidence-based BC strategy, infrastructure such as bins and a collection system should be provided. Furthermore, the dumpsites should be cleaned before the intervention, and community-based incentives (i.e., transformation of the dumpsites into green spaces) should be implemented in parallel with or after the BC strategy implementation.
Mental health intervention

The study results indicate the importance of addressing mental health among caregivers for the effective implementation of sanitary disposal practices. As a practical implication, the Problem Management Plus (PM+) program [38], suggested by the WHO, offers a feasible and scalable solution to address the acute shortage of mental health services in low- and middle-income countries (LMICs). PM+ is a low-intensity, transdiagnostic psychological intervention that can be delivered by trained lay helpers, effectively bypassing the barriers of limited funding, insufficient infrastructure, and the scarcity of mental health professionals in these regions. By focusing on core strategies like stress management, problem-solving, behavioural activation, and strengthening social support, PM+ addresses a wide range of common mental health issues, making it a versatile tool in diverse cultural settings. The implementation of PM+ as a community-based intervention aligns with the need for accessible, cost-effective, and culturally sensitive mental health solutions, promising to significantly enhance mental health care delivery and outcomes in LMICs.

Conclusions

This study investigated the psycho-social factors and underlying mechanisms (i.e., mental health) related to sanitary disposal and collection of AHPs among mothers and caregivers in low-income and informal communities in Durban, South Africa. Our research findings confirmed the link between poor mental health and unsanitary AHP disposal. This is especially relevant because poor mental health is common within South Africa. Addressing mental health problems within these communities is an essential step towards providing sustainable waste management services. The impact of these interventions will lead to a cleaner environment and better health and mental health among community members. Our research findings are an important contribution to the long-term strategy of achieving the Sustainable Development Goals (SDGs) and contribute to the inclusion of vulnerable caregivers with poor health and mental health living in low-income communities in humanitarian action related to environmental and climate change, through evidence-based BC intervention implementation.

Fig. 1 Community locations in relation to central Durban (Map Data Source: Mappin WMS)

Fig. 6 Interaction effects between mental health and the psycho-social factor 'confidence in performance in a hurry' on self-reported sanitary disposal of AHPs. Mental health values are the 16th, 50th, and 84th percentiles (SRQ-20 scale 0-20)

Fig. 7 Interaction effects between mental health and the psycho-social factor 'remembering' on self-reported sanitary disposal of AHPs. Mental health values are the 16th, 50th, and 84th percentiles (SRQ-20 scale 0-20)

The study research protocol was approved by the Ethics Committee of ETH Zurich in Switzerland [EK-2022-N-155] and the University of KwaZulu-Natal Research Ethics Committee [REC-040414-040]. All procedures applied in the research study were in accordance with the Declaration of Helsinki. All study participants were over the age of 18 and provided written informed consent.
Table 2 Factual and action knowledge about health risks and prevention

Table 4 Behaviour, habit, and intention for sanitary disposal of AHPs: frequencies and averages

Table 5 Behavioural determinants of sanitary disposal of AHPs
Quercetin inhibits angiotensin II-induced vascular smooth muscle cell proliferation and activation of JAK2/STAT3 pathway: A target-based networking pharmacology approach

The rapid growth of vascular smooth muscle cells (VSMCs) represents a crucial pathological change during the development of hypertensive vascular remodeling. Although quercetin exhibits significant therapeutic effects on antihypertension, the systematic role of quercetin and its exact mode of action in relation to VSMC growth, and its hypertension-related networking pharmacology, are not well documented. Therefore, the effect of quercetin was investigated using networking pharmacology followed by in vitro strategies to explore its efficacy against angiotensin II (Ang II)-induced cell proliferation. Putative genes of hypertension and quercetin were collected using database mining, and their correlation was investigated. Subsequently, a network of protein-protein interactions was constructed and gene ontology (GO) analysis was performed to identify the role of important genes (including CCND1) and key signaling pathways [including cell proliferation and the Janus kinase 2/signal transducer and activator of transcription 3 (JAK2/STAT3) pathway]. We therefore further investigated the effects of quercetin in Ang II-stimulated VSMCs. The current research revealed that quercetin significantly reduced cell confluency, cell number, and cell viability, as well as the expression of proliferating cell nuclear antigen (PCNA), in Ang II-stimulated VSMCs. A mechanistic study by western blotting confirmed that quercetin treatment attenuated the activation of JAK2 and STAT3 by reducing their phosphorylation in Ang II-stimulated VSMCs. Collectively, the current study revealed the inhibitory effects of quercetin on the proliferation of Ang II-stimulated VSMCs; inhibition of JAK2/STAT3 signaling activation might be one of the underlying mechanisms.
Introduction

High blood pressure is a key risk factor for cardiovascular disorders. It is one of the major causes of heart attack and affects more than one billion people globally (Vos et al., 2020). The steady rise of blood pressure leads to renal disease, myocardial infarction, and heart-related disorders (Ku et al., 2019; Lv and Zhang, 2019). Essential hypertension (EH) is a disease associated with increased systemic circulatory arterial blood pressure induced by an interplay between genetic and environmental factors (Natekar et al., 2014; Sanidas et al., 2017). EH accounts for about 95% of hypertensive patients, in whom the causative factors are unknown; the mechanisms mostly involve sympathetic nervous system hyperactivity, renal mechanisms, dysfunction of endothelial cells, the nitric oxide pathway, enhanced left ventricular ejection force, and high blood pressure (Boutouyrie et al., 2021). According to the American Society for Hypertension, lowering blood pressure should not be the only objective; EH should also be treated with medications to prevent impending cardiovascular syndromes (Beaney et al., 2018; Mancusi et al., 2018).

Antihypertensive drugs currently used in clinics to treat hypertension are divided into five classes: calcium channel blockers, angiotensin-converting enzyme (ACE) inhibitors, thiazide diuretics, angiotensin II (Ang II) receptor blockers, and beta blockers (Brouwers et al., 2021). However, some antihypertensive drugs have several limitations and side effects; for example, thiazide-like diuretics cause hyperuricemia and hyperlipidemia, and ACE inhibitors have side effects such as cough, hypotension, fatigue, and azotemia (Parati et al., 2021; Khalil and Zeltser, 2022). EH remains an incurable illness, and therefore new effective treatment strategies remain to be explored (Tsang et al., 2020; Jama et al., 2022). The discovery of new antihypertensive natural compounds with fewer side effects is an urgent need for the treatment of hypertension. Flavonoid molecules, like polyphenols and flavanols, are essential antioxidant sources in the diet (Anwar et al., 2019). Flavonoid consumption has been linked to lower cardiovascular-associated mortality (Clark et al., 2015; Gee and Ahluwalia, 2016).

Quercetin is a flavonoid with potential antioxidant, antiviral, anti-inflammatory, anticancer, and vasodilating properties (Guillermo Gormaz et al., 2015; Griffiths et al., 2016). Quercetin decreased oxidative stress, elevated nitric oxide biosynthesis, and minimized the dysfunction of arterial endothelial cells (Pereira et al., 2018). Quercetin inhibits adipogenesis through activation of the AMP-activated protein kinase (AMPK) pathway, and it also causes apoptosis in mature adipocytes by inhibiting extracellular signal-regulated kinase 1/2 (ERK1/2) and c-Jun N-terminal kinase (JNK) phosphorylation and activating the apoptosis pathway (Zhao et al., 2021). Quercetin plays an anti-inflammatory role by suppression of the nuclear factor kappa B (NF-κB) pathway (Comalada et al., 2005; Cheng et al., 2019). Thus, quercetin has the potential to be a lead molecule in drug discovery research (Patel et al., 2018). Previous literature has revealed that quercetin attenuates the activation of signaling pathways, which further reduces vascular injury and inflammation (Almatroodi et al., 2021).
Quercetin has antihypertensive properties, improving endothelial dysfunction, lowering blood pressure via inhibition of potassium channel activity, and regulating the functions of several signaling pathways (Xue et al., 2018; Elbarbry et al., 2020). However, the underlying molecular mechanism of quercetin's antihypertensive action remains largely unknown. Networking pharmacology focuses on the interactions between drugs and their targets and points the way towards the discovery of novel drug molecules. This strategy is particularly suitable for the investigation of complicated illnesses like EH (Anighoro et al., 2014). Our results identified the direct and indirect genes (GJA1, PPARG, JAK2, and STAT3) associated with quercetin. These targets were enriched in several GO processes, including development of the circulatory system, regulation of cell proliferation, and blood vessel development, and were also enriched in numerous Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, including the JAK/STAT pathway. Further, we found that Ang II stimulates the proliferation of VSMCs, while quercetin treatment reduces the expression of the proliferative biomarker PCNA, reduces the proliferative capabilities of VSMCs, and inhibits JAK2/STAT3 activation.

Materials and methods

Networking pharmacology was employed to identify the mechanism of quercetin for the treatment of hypertension. The Traditional Chinese Medicine Systems Pharmacology Database (TCMSP), Binding Database (BDB), and STITCH (version 5.0) databases were used to screen the targets of quercetin. The disease targets were screened through the CooLGeN, MalaCards, Therapeutic Target Database (TTD), and Online Mendelian Inheritance in Man (OMIM) databases. The relationship between the intersection genes of quercetin and EH and the phenotype of "essential hypertension" was studied with VarElect, and the common targets were acquired by mapping against the drug action targets. The GeneMANIA database was used to generate a protein interaction network of the intersecting targets of quercetin and EH and the corresponding genes, and key targets were screened according to the extent of their interactions. The "quercetin-potential target" network diagram was drawn using Cytoscape 3.7.2. Using the Database for Annotation, Visualization, and Integrated Discovery (DAVID), the GO function and KEGG pathway enrichment of the targets were analyzed. Finally, in vitro studies were conducted to explore the effect of quercetin on anti-proliferation and on the activation of the JAK2/STAT3 signaling pathway in Ang II-stimulated VSMCs.

Analysis of drug likeness

Based on the PubChem database, a wide range of physical and chemical characteristics of quercetin was determined according to Lipinski's rule of five (RO5). The following factors were considered for further characterization of quercetin: molecular weight (MW), number of H-bond donors and acceptors, number of rotatable bonds, topological polar surface area (TPSA), and octanol-water partition coefficient (XLogP3). SwissADME as well as the ADMET descriptor in Discovery Studio 2016 were used to examine the absorption characteristics of quercetin, namely the blood-brain barrier (BBB) and human gastrointestinal absorption (HGIA) plots, for a clear understanding of the ADMET characteristics of quercetin (Daina et al., 2017; Vora et al., 2019).
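As an illustration of the RO5 screening step, the short Python sketch below counts rule-of-five violations using quercetin descriptors as reported on PubChem (CID 5280343); the numeric values are quoted for illustration only and should be re-checked against the database rather than taken as authoritative.

```python
# Quercetin descriptors as reported on PubChem (CID 5280343);
# quoted here for illustration, not as authoritative values.
quercetin = {
    "mw": 302.24,    # molecular weight, g/mol
    "hbd": 5,        # hydrogen-bond donors
    "hba": 7,        # hydrogen-bond acceptors
    "xlogp3": 1.5,   # computed octanol-water partition coefficient
}

def ro5_violations(c):
    """Count violations of Lipinski's rule of five."""
    return sum([
        c["mw"] > 500,
        c["hbd"] > 5,
        c["hba"] > 10,
        c["xlogp3"] > 5,
    ])

print("RO5 violations:", ro5_violations(quercetin))  # expected: 0
```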
Prediction of quercetin targets

To identify the targets of quercetin, a number of strategies were used, relying mostly on the structural similarity principle, reverse docking techniques, and data integration. PharmMapper, BDB, TCMSP, Swiss Target Prediction, and the STITCH database were used to compile the potential targets (Ru et al., 2014; Daina and Zoete, 2016; Wang et al., 2017; Daina et al., 2019). The compound name quercetin was used to acquire target genes from the STITCH and TCMSP databases. Predicted genes were acquired using the SMILES string of quercetin in the Swiss Target Prediction and BDB databases, based on structural similarity criteria. Furthermore, the structure of quercetin in SDF file format (PubChem CID: 5280343) was uploaded to PharmMapper for potential gene identification, followed by normalization of the acquired genes using the UniProt database. Lastly, the obtained genes were integrated after exclusion of redundant targets (Supplementary Table S2). This gene set is referred to as the "quercetin 182 targets" in the subsequent summary.

Acquiring the essential hypertension-related targets

EH-linked genes were obtained from the CooLGeN, DrugBank, MalaCards, Therapeutic Target Database (TTD), and OMIM databases, to ensure the accuracy of the disease-related genes. Targets with relatively high scores were extracted from the CooLGeN database, and EH-associated targets were retrieved from the OMIM and MalaCards databases. Redundant genes were removed from the datasets, and a total of 483 EH-associated targets were chosen for further study. This gene array is referred to as the "genes associated with EH"; Supplementary Table S2 provides more detailed information on these targets. Targets common to these two classes could be potential anti-EH targets of quercetin. This category of intersecting targets is referred to as the "intersection genes of quercetin and essential hypertension".

Assessment of phenotypic associations between quercetin targets and essential hypertension

The VarElect database enables more accurate identification of genes that are directly or indirectly linked to a specific phenotype or disease (Stelzer et al., 2016). A critical feature of VarElect is its ability to interpret and score gene lists based on the phenotype keywords entered; the analysis scores are largely produced by Elasticsearch-based computation. A term's score is evaluated from the intensity of its occurrence in the corresponding GeneCards, corrected for its overall frequency across genes. The association between the intersection genes of quercetin and EH and the phenotype of "essential hypertension" was investigated using VarElect.

Evaluation of protein-protein interactions

In addition to creating PPI networks, the GeneMANIA platform can also find a set of targets similar to those entered. It draws on a variety of functionally relevant information, such as co-expression and co-localization (Ogris et al., 2018). The GeneMANIA database was used to generate the protein-protein interaction (PPI) network of the intersecting targets of quercetin and EH along with the corresponding genes.
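To make the intersection and network-topology steps concrete, a minimal Python sketch follows. The file names are hypothetical placeholders for gene lists exported from the databases above and for an edge list exported from GeneMANIA/Cytoscape; the centrality measures are the same ones named in the PPI analysis.

```python
import networkx as nx

# Hypothetical inputs: one gene symbol per line in each file,
# exported from the databases named above.
with open("quercetin_targets.txt") as f:
    drug_targets = {line.strip().upper() for line in f if line.strip()}
with open("eh_genes.txt") as f:
    disease_genes = {line.strip().upper() for line in f if line.strip()}

overlap = drug_targets & disease_genes
print(f"{len(overlap)} intersection genes of quercetin and EH")

# PPI edge list ("GENE_A GENE_B" per line) exported from
# GeneMANIA/Cytoscape; the file name is a placeholder.
g = nx.read_edgelist("ppi_edges.txt")

degree = dict(g.degree())
closeness = nx.closeness_centrality(g)
betweenness = nx.betweenness_centrality(g)

hubs = sorted(degree, key=degree.get, reverse=True)[:10]
print("top hub genes by degree:", hubs)
```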
Using the GeneMANIA analysis, we may obtain not only the associations between the entered intersecting targets, but also many other target genes that are tightly associated with them. This extended set of genes is termed the "anticipated targets of quercetin anti-EH" in the current analysis. The Network Analyzer tool in Cytoscape 3.7.2 was used to calculate topological characteristics of the PPI network, such as degree, closeness centrality, betweenness centrality, and average shortest path length.

Enrichment of gene ontology evaluation

Metascape is an effective gene annotation and enrichment analysis program. The predicted quercetin anti-EH targets were identified and categorized into various GO pathways. To obtain the biological processes as well as the KEGG pathways relevant to the mechanism of EH, the DAVID (version 6.8) tool was used.

Implementing a target-pathway/function network

The network was created using Cytoscape 3.7.2 for the illustration of complicated networks. The nodes of the network identified potential quercetin targets for EH, together with the biological processes and signaling pathways discovered by enrichment analysis, and the edges represented associations between them.

Vascular smooth muscle cell isolation

Wistar rats were anesthetized using isoflurane; the abdominal aorta was immediately dissected and washed with ice-chilled 1.5 mM CaCl2 HEPES-buffered salt solution (HBSS) to remove blood. After that, the aorta was cut longitudinally and the endothelial cells were delicately removed with a cotton swab. Cells were kept in 1.5 mM CaCl2-HBSS at 4 °C for 30 min and then treated with Ca2+-free HBSS at room temperature for 20 min. Then, a digestion solution containing mixed enzymes (0.5 ml), collagenase (3.2 mg), BSA (2 mg), and papain (0.3 mg) was prepared, and the aorta was digested at 37 °C for 20-30 min. After the removal of excess enzyme, Ca2+-free HBSS was used to wash the cells, which were kept in a T25 flask with 5 ml of DMEM containing 10% FBS, supplemented with 100 μg/ml streptomycin and 100 IU/ml penicillin. Lastly, VSMCs were gently dispersed by pipetting up and down 50 to 60 times and kept in an incubator containing 5% CO2. For each experiment, fresh VSMCs were cultured and used at the third or fourth passage. All animal experiment protocols were approved by the Animal Ethics Committee of Fujian University of Traditional Chinese Medicine (No. FJTCM IACUC 2021185).

Immunofluorescence analysis

Isolated primary VSMCs were seeded into glass-bottom dishes at a density of 0.8 × 10⁵ cells/well. Following 24 h of cultivation, cells were permeabilized with 0.25% Triton X-100 for 10 min, fixed with 4% paraformaldehyde for 15 min, and then blocked with blocking buffer (consisting of 5% BSA, 85% PBS, and 10% goat serum) for 1 h. Subsequently, cells were incubated with an antibody against α-SMA (1:200) at 4 °C overnight, followed by washing three times with PBS and incubation with an anti-rabbit secondary antibody (1:400) in the dark at room temperature for 1 h. Then, cells were washed three times with PBS, incubated with Hoechst stain (Thermo Fisher Scientific) for 10 min, and observed under a confocal microscope (PerkinElmer Inc., Waltham, MA, United States).

Cell Counting Kit-8 assay for cell viability analysis

Cell viability was determined using the Cell Counting Kit-8 (CCK-8) assay. The extracted VSMCs in suspension at a density of 3 × 10⁴ cells/mL were seeded into 96-well plates.
When VSMC confluency reached 30%-40%, cells were placed in serum-free medium for starvation for 6-8 h, after which the VSMCs were treated with various concentrations of quercetin (3.125, 6.25, 12.5, 25, 50, and 100 μM) for 24 h or Ang II (0.01, 0.1, 1, and 10 μM) for 24 h. In addition, VSMCs stimulated with 0.1 μM Ang II were treated with quercetin (6.25, 12.5, and 25 μM) for 24 h. Following treatment, each well was loaded with 10 µl of the CCK-8 solution and incubated at 37 °C for 2 h. Absorbance was measured at 450 nm using a microplate reader (Multiskan FC, Thermo Fisher Scientific).

Western blotting analysis

For the extraction of total proteins, Western cell lysis buffer (Beyotime Biotechnology, Shanghai, China) supplemented with 1 mM phenylmethylsulfonyl fluoride (PMSF) and other protease inhibitors, including cocktail (MedChemExpress, Monmouth Junction, NJ, United States) and PhosStop (Roche; Basel, Switzerland), was used. Briefly, cells were lysed with the lysis buffer for 20 min on ice and centrifuged at 14,000 g at 4 °C for 20 min, as described previously (Liu et al., 2021). After collection of the supernatants, total protein concentration was assessed with a BCA protein assay kit, and equal amounts of each sample were separated by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). Proteins were transferred to PVDF membranes using a wet transfer system. The membranes were blocked with 5% skimmed milk for 2 h at room temperature and then incubated with primary antibodies (anti-p-JAK2, anti-JAK2, anti-p-STAT3, anti-STAT3, anti-PCNA, or anti-GAPDH; dilution 1:1000) at 4 °C overnight. After that, the membranes were washed three times with TBST buffer and incubated with anti-rabbit or anti-mouse secondary antibody conjugated to horseradish peroxidase (dilution 1:5000) for 1 h at room temperature. Protein expression was detected with a chemiluminescence kit and quantified with ImageJ software.

Statistical analyses

Statistical Package for the Social Sciences (SPSS 25.0) software (Chicago, IL, United States) was used for statistical analysis. All data are presented as means ± SD. The Shapiro-Wilk test was employed to determine normality in experiments with three or more groups. When the data met a normal distribution, one-way analysis of variance (ANOVA) was used, followed by the Bonferroni post-hoc test for pairwise comparisons. The non-parametric Kruskal-Wallis test was used to analyze non-normally distributed data, followed by Dunnett's t-test for paired comparisons. Differences were considered significant at p < 0.05.

Abbreviations for Table 1: MW, molecular weight; TPSA, topological polar surface area; BBB, blood-brain barrier; GIT, gastrointestinal tract; XLogP3, computed octanol-water partition coefficient.

FIGURE 1 The Circos plot shows the intersecting genes. The outer arcs symbolize the identity of each gene list. Each of the inner arcs represents a gene, and each gene is represented by its own spot on the arc. Dark orange indicates genes found in multiple databases, whereas light orange signifies genes found in only one gene dataset. Purple lines connect identical genes that appear in different gene databases. Blue lines connect genes identified as having similar ontologies.
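The normality-gated choice of test described in the statistics section above can be sketched in a few lines; the readings below are invented values standing in for OD450 measurements, and the group names are hypothetical.

```python
import numpy as np
from scipy import stats

# Invented OD450 readings for three illustrative groups.
control = np.array([0.82, 0.85, 0.80, 0.84])
ang_ii = np.array([1.10, 1.15, 1.08, 1.12])
ang_ii_quercetin = np.array([0.90, 0.93, 0.88, 0.91])
groups = [control, ang_ii, ang_ii_quercetin]

# Shapiro-Wilk normality check per group, as described above.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)

if normal:
    stat, p = stats.f_oneway(*groups)      # one-way ANOVA
    test = "one-way ANOVA"
    # Pairwise follow-up would use a Bonferroni-adjusted alpha,
    # e.g. 0.05 divided by the number of comparisons.
else:
    stat, p = stats.kruskal(*groups)       # Kruskal-Wallis
    test = "Kruskal-Wallis"

print(f"{test}: statistic = {stat:.3f}, p = {p:.4f}")
```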
An evaluation of quercetin's drug likeness

For human intestinal absorption (HIA) and blood-brain barrier (BBB) penetration, the ADMET analysis provides four levels of prediction at the 95% and 99% confidence ellipsoids. Quercetin was well absorbed via the intestine and also had a very high BBB penetration level, as seen in Table 1. Supplementary Figure S1A illustrates the chemical structure, while Supplementary Figure S1B demonstrates the suitable physicochemical space for oral bioavailability (OB) as well as the pharmacokinetic characteristics, including a TPSA of 131.36 Å², nine rotatable bonds, and so on. In addition, the BOILED-Egg analysis indicated quercetin's penetration into the blood-brain barrier [log BB = -1.098] and 77.207% absorption in the human gastrointestinal tract, illustrated in the corresponding zones of the plot in Supplementary Figure S1C.

Quercetin targets and essential hypertension correlation analysis

The intersection of targets from the various databases is depicted in a Circos plot (Figure 1). To identify gene overlaps, similarly enriched ontology terms and their functions or shared pathways were used in addition to direct overlaps, as given in Supplementary Table S1. To study the genotype-phenotype correlations, the 54 overlapping genes of quercetin and EH were entered into the VarElect interactive platform; the results are presented in Table 2. The GJA1 gene on chromosome six encodes the gap junction alpha-1 protein (GJA1), widely recognized as connexin 43 (Cx43). Connexin (Cx) protein isoforms form gap junctions (GJs), which act as pathways for information exchange between cells and are expressed in blood vessels in four different forms: Cx37, Cx40, Cx43, and Cx45. MAPK1, commonly recognized as p42MAPK and ERK2, is a mitogen-activated protein kinase encoded by the MAPK1 gene in humans. MAP kinases are involved in several cellular activities, such as differentiation, proliferation, expression control, and growth, as well as acting as an integration site for various biochemical signals. STAT3 is an essential component of intracellular signaling circuits and acts on several downstream genes such as CyclinD1, c-myc, and VEGFs. Activation of STAT3 depends on stimulation of the signal transduction protein JAK2 by various cytokines, such as interleukins, and growth factors (HGF and EGF), and results in phosphorylation of STAT3 at tyrosine residue 705. Constitutively activated STAT3 promotes tumor cell proliferation and survival, stimulates angiogenesis, and inhibits the immune response to malignancy.

Network of protein-protein interactions

The integrated PPI network was constructed by uploading the 54 genes to identify their functional roles. The source networks were classified into different categories (e.g., co-expression), with the weight of each network and the number of interactions listed within each category. The weight of the interaction correlations in the network is represented by percentages in the results. According to the outcomes, 55.84% of the target interactions in the network represented co-expression and 16.23% physical interactions.
In addition, there were interactions in the form of co-localization (11.27%), predicted interactions (7.37%), pathways (3.92%), genetic interactions (3.31%), and shared protein domains (2.06%) (Figure 2A). Supplementary Table S3 lists the estimated average closeness centrality, betweenness centrality, degree, and shortest path length of the nodes in the network. Analysis of the topological variables of the network revealed that genes from the STAT family (STAT1, STAT3) and cytokine-related genes (IL-6, IL-1β, and IL-12α) were ranked highly throughout the network.

Evaluation of gene ontology and pathways

We used GO evaluation to further investigate the role of the 54 quercetin targets. In the category of EH pathogenesis, these targets were involved in the development of the circulatory system, regulation of endothelial and epithelial cell proliferation, muscle cell proliferation, blood vessel development, and cytokine response (Figure 3A). The targets were also implicated in the plasma membrane raft, the RNA Pol-II transcription regulator complex, and the IL-6 receptor complex (Figure 3B). In addition, the category of molecular functions included signaling receptor binding, protein kinase binding, cytokine activity, and protein kinase regulator activity (Figure 3C). The effect of quercetin on potential signaling pathways was investigated using KEGG pathway assessment. The analysis showed that the "PI3K-AKT signaling pathway," "cytokine-cytokine receptor interactions," "JAK-STAT signaling pathway," "MAPK signaling pathway," and "pathways in cancer" were highly enriched (Figure 3D).

Networking of target-functions

We generated the target-pathway/function network after performing extensive network analysis on a number of representative biological processes, molecular functions, and signaling pathways. Multiple targets were systematically implicated in several biological processes, as shown in Figure 4. For instance, STATs and JAKs were involved in processes such as the "Wnt signaling pathway," "JAK/STAT signaling pathway," "protein kinase activity," "PI3K-AKT signaling pathway," "vascular smooth muscle contraction," and "protein homodimerization activity." STAT1 and STAT3 were observed in a number of GO processes, including "circulatory system development," "cell proliferation regulation," "cytokine-mediated signaling pathway," "positive regulation of phosphorus metabolic process," and "cytokine activity."

Quercetin inhibits Ang II-induced proliferation in vascular smooth muscle cells

The primary VSMCs were isolated from the abdominal aorta and identified with an α-SMA antibody; 95% of cells were detected as positive by immunofluorescence staining (Supplementary Figure S2A). The CCK-8 assay showed that quercetin treatments (3.125-100 μM) did not affect cell viability significantly (Supplementary Figure S2B), while Ang II (0.01 and 0.1 μM) significantly increased the cell viability of primary VSMCs (Supplementary Figure S2C). The CCK-8 assay further showed that 6.25, 12.5, and 25 µM quercetin significantly attenuated the Ang II-induced increases in cell density (Figure 5A), cell number (Figure 5B), and viability of VSMCs (Figure 5C). Ang II stimulation increased the expression level of PCNA, which was significantly reduced by quercetin treatment (Figure 5D).

FIGURE 4 Networking of target-function evaluation. A functional component of the interconnected targets is represented in a pie chart with different colors where the target is actively engaged in GO processes and pathways.
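The enrichment figures reported in this section come from over-representation tests of the kind implemented in DAVID and Metascape, which at their core are Fisher/hypergeometric calculations. The sketch below illustrates that calculation; all counts are invented for illustration and are not taken from this study.

```python
from scipy.stats import hypergeom

# Illustrative counts only (not taken from this study).
N = 20000   # background: total annotated genes
K = 300     # genes annotated to one pathway (e.g., JAK-STAT)
n = 54      # intersection genes of quercetin and EH
k = 8       # intersection genes falling in that pathway

# P(X >= k) under the hypergeometric null distribution.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.3e}")
```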
FIGURE 5 Quercetin's influence on Ang II-induced VSMCs. VSMCs stimulated with 0.1 µM Ang II were treated with quercetin (6.25, 12.5, and 25 µM) for 24 h. (A) Representative images of the cultured VSMCs. (B) Relative number of VSMCs determined by cell counting.

Quercetin inhibited Ang II-induced JAK2/STAT3 activation in vascular smooth muscle cells

Western blot analysis (Figure 6) indicated that Ang II stimulation activated JAK2 (Figure 6A) and STAT3 (Figure 6B) via their phosphorylation in VSMCs, which was attenuated after quercetin (12.5 μM) treatment.

Discussion

The pathophysiology of EH is highly complex. Multiple signaling pathways, including the JAK2/STAT3 pathway, play a significant role in the pathophysiology of EH (Mladěnka et al., 2018; Alanazi and Clark, 2019). The bioavailability of quercetin has been found to be involved in anti-hypertension, especially in EH. Therefore, our current study aimed to explore the potential underlying mechanism of quercetin's antihypertensive action using network pharmacology analysis. Our results identified 183 quercetin target genes, 384 genes associated with essential hypertension (EH), and 54 common genes of quercetin-EH, which were significantly enriched in several GO processes, i.e., muscle cell proliferation, vascular contraction, regulation of endothelial and epithelial cell proliferation, blood vessel development, etc. Furthermore, analysis of the topological variables of the network revealed that genes from the STAT family (STAT1, STAT3), JAK2, and cytokine-related genes were highly ranked throughout the network. KEGG analysis showed that the therapeutic effects of quercetin on EH involve the PI3K-AKT signaling pathway, the JAK-STAT signaling pathway, and others. In vitro experiments confirmed that quercetin treatments attenuated the cell density, cell number, and cell viability and reduced PCNA expression in Ang II-stimulated VSMCs. Moreover, western blotting analysis showed that quercetin inhibited JAK2/STAT3 activation in Ang II-stimulated VSMCs.

Network pharmacology has recently become an interdisciplinary field that encompasses computational biology, conventional pharmacology, structural biology, and multi-omics strategies. The 54 overlapping genes of quercetin and EH were entered into the VarElect interactive platform to investigate genotype-phenotype correlations. Analysis of the topological variables of the network revealed that genes from the STAT family, cytokine-related genes, and nitric oxide-associated genes were highly ranked throughout the network. The family of NOS enzymes is categorized into three subgroups: NOS1, NOS2, and NOS3. All of these subgroups participate on a large scale in physiological processes within the central nervous system, the immune system, and the cardiovascular system (Simons et al., 2016). PPARγ has been linked to the pathogenicity of a variety of diseases, namely obesity, diabetes, high blood pressure, and tumors; it furthermore plays an essential part in the pathology of EH (Echeverría et al., 2016). The JAK2 and STAT3 genes were also observed among the key genes, with VarElect-implemented scores of 9.32 and 7.92, respectively.
Moreover, the quercetin and EH overlapping targets revealed that the highly enriched signaling pathways were attributed to the regulation of endo-/epithelial cell proliferation and related signaling pathways. STATs and JAKs were involved in several GO processes and signaling pathways, such as cancer pathways, the JAK/STAT signaling pathway, the AKT signaling pathway, vascular smooth muscle contraction, and so on. STAT1, STAT3, and JAK2 were observed in various GO processes, including development of the circulatory system, regulation of cell proliferation, cytokine-mediated signaling, and cytokine activity. All of these signaling pathways are involved in the development of EH. The GO enrichment and KEGG pathway analyses were consistent with the target-phenotype correlations from the VarElect analysis. Based on the findings of the aforementioned studies, it is clear that quercetin has an important therapeutic and preventive role in EH through multiple targets (Jakaria et al., 2019).

The JAK/STAT pathway responds to growth factors and cytokines by transducing signals from the cell surface to the nucleus in various cell types, and is implicated in smooth muscle proliferation, endothelial dysfunction, and inflammation (Lacolley et al., 2018; Liu et al., 2019). JAKs transduce signals to several members of the STAT family, which includes STAT3. JAK2, its associated receptors, and the STAT3 pathway are activated by Ang II (He et al., 2019). JAK2 activation has been associated with VSMC proliferation, vascular endothelial apoptosis, and vascular cell contraction in in vitro and ex vivo experiments (Zhang et al., 2018). Ang II binds to its specific cell surface receptors, which then recruit and activate receptor-associated JAKs (Gai et al., 2021). The activated JAKs phosphorylate tyrosine residues on specific receptors (e.g., AT1R), which are recognized by the SH2 domains of STATs (Bousoik and Montazeri Aliabadi, 2018). Once phosphorylated, STAT3 forms homo-/heterodimers that translocate to the nucleus, where they bind to specific DNA sequences and alter the expression of the genes involved (Bhaskaran et al., 2014; Harhous et al., 2019). The mechanistic representation is shown in Figure 7. In previous studies, inhibiting this route reduced neointima development and cell proliferation after intima disruption in the carotid arteries (Milara et al., 2018). Understanding of the importance of STAT3 in vascular dysfunction disorders has been hampered by insufficient progress in genomic studies and a lack of specific pharmacological antagonists. This suggests that the JAK/STAT signaling pathway may play a significant role in the control of vascular function through Ang II (Jamilloux et al., 2019).

FIGURE 7 Graphical presentation of the anti-proliferative effect of quercetin along with inhibition of JAK/STAT activation in VSMCs. Ang II binds to the angiotensin II type 1 receptor (AT1R), which mediates phosphorylation of JAK2, and phosphorylated JAK2 activates the inactive form of STAT3 by phosphorylation of tyrosine residues. The phosphorylated complex of JAK2 and STAT3 assists STAT3 in dimerization. After dimerization, the STAT3 protein is translocated to the nucleus, where it binds to gene promoters, and the altered protein expression may lead to proliferation of VSMCs and consequently hypertension. Quercetin treatment inactivates the activated forms of JAK2 and STAT3 by dephosphorylation as well as by blocking the dimerization of both proteins. Created with BioRender.com.
Furthermore, flavonoids are of keen interest to the research community because of their diverse pharmacological and biological properties, such as anti-inflammatory, antioxidant, antiviral, and anticancer activities (Agrawal, 2011; David et al., 2016). In addition to the above properties, quercetin is found in derivative forms in human plasma, including glucuronides, alongside α-tocopherol and α/β-carotene. Significantly higher levels of α-tocopherol and about 1% quercetin were found in low-density lipoprotein (LDL) isolated from human plasma after onion consumption (Moon et al., 2000). In Cu2+-induced oxidation of human LDL, quercetin derivatives including 3'-O-methylquercetin showed a prolonged lag phase, with about half the effect of the aglycone (Manach et al., 1998). Additionally, two quercetin conjugates were studied in the plasma of quercetin-administered rats, namely quercetin 3-O-β-D-glucuronide (Q3GA) and quercetin 4'-O-β-D-glucuronide (Q4'GA). Orally administered Q3GA had an antioxidant effect in LDL isolated from rat plasma as well as on Cu2+-induced oxidation of LDL in human plasma (Moon et al., 2001).

Quercetin has been studied widely in a variety of animals for the treatment of several disorders via targeting of multiple signaling pathways. Ang II induced activation of the MAP kinases ERK1/2, JNK, and p38 in rat aortic smooth muscle cells (RASMCs). In Ang II-induced RASMCs, quercetin had an inhibitory effect on JNK, whereas ERK1/2 and p38 activation were not affected by quercetin treatment (Yoshizumi et al., 2001; Yoshizumi et al., 2002; Perez-Vizcaino et al., 2006). In several in vivo animal models, it has been shown that dimerization and translocation of p-STAT3 occur through phosphorylation of Tyr-705 catalyzed by JAK1, JAK2, JAK3, and TYK2, whereas Ser-727 is phosphorylated by MAPK, which maximizes the transcriptional capacity of STAT3. Moreover, in 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) models, STAT3 activation was found to be associated with JAK2 in astrocytes, whereas MPTP-activated ERK1/2 phosphorylation did not lead to phosphorylation of STAT3 at the Ser-727 residue (Skaper et al., 2001; Yanagisawa et al., 2001; Sriram et al., 2004). Moreover, the influence of Ang II on the phosphorylation of STAT3 in primary cultured brain stem cells was identical to the effects of the peptides (Kandalam and Clark, 2010).

The stimulation of the JAK2/STAT3 pathway by Ang II has been shown in experimental studies, both in vitro and in vivo, and therefore plays a vital role in the development of Ang II-dependent hypertension (Satou and Gonzalez-Villalobos, 2012). In VSMCs, Ang II induces JAK2, which activates the RhoA guanine nucleotide exchange factor Arhgef1, eventually activating RhoA signaling and causing hypertension (Banes-Berceli et al., 2011). In contrast, suppression of Arhgef1 reduces Ang II-induced hypertension (Terada and Yayama, 2021). Moreover, therapeutic suppression of JAK2 has been shown to reduce the progression of hypertension in Ang II-infused rats (Guilluy et al., 2010; Kirabo et al., 2011). The same effect was observed in JAK2 knockdown cells (Kirabo et al., 2011). In vivo results have also shown that modulation of JAK2 and AT1Rs reduces the progression of hypertension and proteinuria in diabetic renal dysfunction (Forrester et al., 2018; El-Arif et al., 2022). In myocytes, Ang II triggered biphasic STAT3 phosphorylation regulated by TLR4 (Han et al., 2018).
Our current study revealed that quercetin inhibits proliferation and JAK2/STAT3 pathway activation in Ang II-stimulated VSMCs. Consistent with previous findings on the association of Ang II with JAK/STAT signaling, our results suggest that quercetin inhibits proliferation and plays a significant role in suppressing JAK2/STAT3 activation, via their phosphorylation, in VSMCs induced by Ang II.

Conclusion

In the current study, a networking pharmacology approach revealed that quercetin exerts its effect against EH targets through multiple signaling pathways. Moreover, in vitro studies showed that quercetin inhibits the Ang II-induced proliferation of VSMCs by targeting JAK2/STAT3 signaling.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
Recognition of M-type stars in the unclassified spectra of LAMOST DR5 using a hash learning method

Our study aims to recognize M-type stars which are classified as "UNKNOWN" due to bad quality in Large sky Area Multi-Object fibre Spectroscopic Telescope (LAMOST) DR5 V1. A binary nonlinear hashing algorithm based on Multi-Layer Pseudo Inverse Learning (ML-PIL) is proposed to effectively learn spectral features for M-type star detection, which can overcome the bad-fitting problem of template matching, particularly for low-S/N spectra. The key steps and the performance of the search scheme are presented. A positive dataset is obtained by clustering the existing M-type spectra to train the ML-PIL networks. By employing this new method, we find 11,410 M-type spectra out of 642,178 "UNKNOWN" spectra and provide a supplemental catalogue. Both the supplemental objects and the released M-type stars in DR5 V1 compose a whole M-type sample, which will be released to the public in the official DR5 in June 2019. All the M-type stars in the dataset are classified into giants and dwarfs by two suggested separators: 1) the colour diagram of H versus J − K from 2MASS; 2) the line indices CaOH versus CaH1; and the separation is validated with the HRD derived from Gaia DR2. The magnetic activities and kinematics of the M dwarfs are also provided, using the equivalent width (EW) of the Hα emission line and the astrometric data from Gaia DR2, respectively. Simulations are conducted in Python environment to validate the proposed model.

INTRODUCTION

M-type stars are becoming dominant targets for research on the structural evolution and kinematics of the local Milky Way. M giants and M dwarfs have similar spectral features, both with strong molecular characteristics. M giants are red-giant-branch (RGB) stars with low surface temperature and high luminosity in the late phase of stellar evolution. Their luminous nature allows us to use these stars as good tracers to study the outer Galactic halo and distant substructures (Zhong et al. 2015). M dwarfs, main-sequence stars with M∗ ∼ 0.075-0.6 M⊙ (Chabrier et al. 2000), are the dominant stellar constituent in the solar neighborhood and probably the Galaxy (Henry et al. 1994, 2006; Bochanski et al. 2010; Salpeter & Hoffman 1995; Chabrier 2003). They are very useful sources for studying and probing the lower end of the Hertzsprung-Russell diagram (HRD), even down to the hydrogen-burning limit. More and more M dwarf samples enable us to deepen the understanding of their fundamental properties, just as has been done for more massive stars. For example, several key interrelated astrophysical topics have been explored, including the precise relationship between mass and radius (Feiden & Chaboyer 2014; Jackson & Jeffries 2014; Han et al. 2017), the mass-luminosity relation (Henry & McCarthy 1993; Delfosse et al. 2000; Torres et al. 2010; Benedict et al. 2016), rotation and angular momentum (Stassun et al. 2011; Houdebine et al. 2017), magnetic activity (Reiners 2012; Feiden & Chaboyer 2014; Yang et al. 2017), complex atmospheric parameters and dust settling in their atmospheres, and age dispersion within populations (Veyette et al. 2017; Bayo et al. 2017). The largest spectroscopic databases of M-type stars come from multi-object spectroscopic surveys such as the Sloan Extension for Galactic Understanding and Evolution (SEGUE) (Yanny et al. 2009) and the LAMOST Experiment for Galactic Understanding and Exploration (LEGUE) (Newberg et al. 2012). Besides the formal data releases of
the surveys, specific M dwarf catalogs were also presented by astronomers. An M dwarf catalogue of SDSS including more than 70,000 stars was presented, and two M dwarf catalogues of LAMOST were published (Yi et al. 2014; Guo et al. 2015). Considering the intrinsic low brightness of M dwarfs and the large distances of M giants, however, many low S/N M-type spectra have not been recognized in these surveys. LAMOST DR5 V1 has released more than 9 million spectra, including 640,000 "UNKNOWN" spectra (not classified by the LAMOST pipeline (Luo et al. 2015)). Some of these "UNKNOWN" spectra, mostly with low S/N, are valuable for astronomical research. For example, Huo et al. (2017) identified eight quasars from the LAMOST DR3 "UNKNOWN" spectra in the area of the Galactic anti-center of 150° ≤ l ≤ 210° and |b| ≤ 30°. By applying a machine learning method, Li et al. (2018) recognized a total of 149 carbon stars that were misclassified as "UNKNOWN" in LAMOST DR4. Ren et al. (2018) published a catalog of White Dwarf-Main Sequence binaries based on the DR5 V1 dataset, several of which were classified as "UNKNOWN" by the LAMOST pipeline. The classification method of the LAMOST pipeline is based on template matching, in which each observed spectrum is cross-matched with a set of templates to calculate chi-square values. The template corresponding to the smallest value indicates the class the object belongs to. Sometimes the chi-square value of the best-fitting template for a low S/N spectrum has too low a confidence, which makes the pipeline refuse to judge and label the class as "UNKNOWN". Other than template matching, query-based machine learning methods are 'similarity search' algorithms that can retrieve objects in a database with a specific pattern while ignoring irrelevant noise. The Approximate Nearest Neighbor Search (ANNS) is a commonly used query method, and the hash learning technique is one of the most widely used ANNS algorithms (Wang et al. 2016a). The basic idea of hashing-based search techniques is to learn the relationships that map the high-dimensional raw data to compact binary codes (series of digits consisting of 0 and 1), and then to retrieve the nearest neighbors of the query pattern using the Hamming distance (frequently used for representing the distance between two binary codes) in the binary code space. Consequently, searching in the hash code space is extremely efficient in both time and memory consumption. A schematic diagram of a hash learning search is shown in Fig. 1. Many hash methods, including Locality Sensitive Hashing (LSH) (Andoni & Indyk 2006), Spectral Hashing (specH) (Weiss et al. 2008), Iterative Quantization (ITQ) (Gong & Lazebnik 2011), Spherical Hashing (SpH) (Heo et al. 2012) etc., have been intensively studied and widely used in many different fields, and their advantages and weaknesses have also been deeply investigated (Bondugula 2013; Wang et al. 2016b). In this paper, we employ Semantic Hashing (SH) (Salakhutdinov & Hinton 2009) to construct a deep hash learning model to search for M-type spectral patterns through learning hidden binary features and reconstructing the input data. However, training such a deep generative model often requires many iterations, which makes it not only extremely time-consuming for large amounts of data but also dependent on repeated parameter tuning based on experience rather than a theoretical basis.
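To make the Hamming-space retrieval concrete, here is a minimal sketch (in Python/numpy; our illustration, not the authors' code) of a nearest-neighbour search over packed binary codes:

```python
import numpy as np

def hamming_search(query_code, database_codes, k=10):
    """Indices of the k database codes closest to the query in Hamming
    distance. Codes are uint8 arrays of packed bits."""
    diff = np.bitwise_xor(database_codes, query_code)  # 1 where bits disagree
    dist = np.unpackbits(diff, axis=1).sum(axis=1)     # per-code popcount
    return np.argsort(dist)[:k]

# Toy usage: 5 database spectra with 256-bit fingerprints (32 bytes each).
rng = np.random.default_rng(0)
db = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)
query = db[2] ^ np.uint8(1)   # entry 2 with one bit flipped per byte -> still closest
print(hamming_search(query, db, k=3))
```

Because the distance computation is just XOR plus bit counting, the query cost scales gently even for hundreds of thousands of fingerprints, which is the efficiency argument made above.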
The appearance of pseudo inverse learning (PIL) shed new light on the deep learning technique, because PIL is a supervised learning algorithm for training a single hidden layer feedforward neural network that does not need to tune the hidden layer parameters once the number of hidden layer nodes is determined. The weight and bias vectors between the input layer and the hidden layer are randomly generated, and these are independent of the training samples and the specific applications (Pal et al. 2015). In this study we build a multilayer PIL (ML-PIL) to fulfill the hash learning process, so as to search for M-type stars in the "UNKNOWN" spectra of LAMOST DR5 V1. There are methods to separate giants from dwarfs for M-type stars based on colors, spectral indices, proper motions, etc. Bessell & Brett (1988) proposed a color discrimination method. In their study, M giants and M dwarfs are distributed around different loci in the [J − H, H − K] color-color diagram, which is mainly caused by the differences in the opacity of the molecular bands of H2O (Bessell et al. 1998). Because M giants have relatively larger distances and smaller proper motions, a reduced proper motion method was used to separate M giants from M dwarfs (Lépine & Gaidos 2011). By comparing the spectra of M dwarfs and M giants, several gravity-sensitive molecular and atomic spectral indices were selected to determine the luminosity class (Mann et al. 2012). Recently, a new photometric method combining 2MASS and WISE photometry was used to recognize M giant spectra in the LAMOST dataset (Zhong et al. 2015). The strength ratio of the TiO band to the CaH band varies with surface gravity. Reid et al. (1995) defined the TiO5, CaH2, and CaH3 spectral indices, and Zhong et al. (2015) used the aforementioned indices to distinguish M giants from M dwarfs. In addition, other methods, such as Mg2 versus g−r, were used for the separation (Covey et al. 2008). We compare different giant/dwarf separation schemes and suggest two additional separation indicators with better correctness. The paper is organized as follows: Section 2 briefly introduces the spectral data used in the paper along with the spectra preprocessing; Section 3 presents the ML-PIL-based hash learning scheme, the construction of positive and negative samples, the model training and the performance evaluation of the method on real spectral data, and then the application of ML-PIL in searching for M-type stars in the LAMOST DR5 V1 "UNKNOWN" dataset; Section 4 compares different giant/dwarf separation schemes for M-type stars and suggests two useful indicators, followed by an investigation of the activity and kinematics of the whole M dwarf sample in DR5; the final section summarizes the work of this paper and envisions potential future work. DATA AND PREPROCESSING LAMOST is a 4-m reflecting Schmidt telescope with a large field of view (FoV) of 5 degrees in diameter. It has 4,000 fibers mounted on its focal plane and 16 spectrographs with 32 CCD cameras, so that it can simultaneously observe up to 4,000 objects (Cui et al. 2012). The raw CCD data are reduced and analyzed by the LAMOST data pipelines, which consist of a 2D pipeline and a 1D pipeline. The primary functions of the 2D pipeline include bias calibration, flat field correction, spectral extraction, sky subtraction, wavelength calibration, flux calibration, sub-exposure combination, etc.
The calibrated spectra from the 2D pipeline are then fed to the 1D pipeline, which performs spectral classification and parameter determination based on template matching and chi-square criteria (Luo et al. 2015). By July 2017, LAMOST had completed its five-year regular survey. The LAMOST DR5 V1 includes 9,017,844 spectra of stars, galaxies, quasars, and unrecognized objects. These spectra cover the wavelength range from 3690 to 9100 Å with a resolution of R ∼ 1800 at the wavelength of 5500 Å. "UNKNOWN" data from LAMOST DR5 Among the 9 million spectra in LAMOST DR5 V1, 642,178 unrecognized spectra were labeled as "UNKNOWN". During the classification process of the 1D pipeline, a spectrum is classified as "UNKNOWN" if the confidence of the classification result is lower than a given threshold value, e.g., the chi-square value of the best-match result is greater than a certain value, or the target spectrum has almost equal similarities to multiple dissimilar templates. These problems occur in the multi-template matching process, which we will refer to as the multi-template matching problems, mostly owing to the low spectral S/N (see top panel of Fig. 2). The lower panel of Fig. 2 shows part of the "UNKNOWN" objects in a color-color diagram. The M-type star candidates should be located in the upper right region of this panel. Due to the intrinsic low luminosity of late-type M dwarfs, most of them have low S/N spectra, which are expected to be classified as "UNKNOWN" objects. To efficiently recognize M-type spectra from the 642,178 "UNKNOWN" spectra using an approach less sensitive to noise than the multi-template matching of the 1D pipeline, we choose an approximate proximity search method based on a deep learning model, which can combine the low-level features layer by layer to obtain more abstract high-level feature expressions, and then discover the inherent and essential feature representation of complex data. Data preprocessing LAMOST spectra cover the wavelength range from 3690 to 9100 Å with a resolution R ∼ 1800 at the wavelength of 5500 Å. First, each spectrum is rebinned onto the same wavelength space with a fixed step length. Then, we normalize the spectra by re-scaling the fluxes to eliminate scale differences among the raw spectra. For a given spectral set $X = \{x_1, \ldots, x_n\}$, each $x_i = (x_{i1}, \ldots, x_{ip})^T \in \mathbb{R}^p$ represents a spectrum, in which $x_{ij}$ is the flux at a given wavelength. The normalization is performed as $$\tilde{x}_{ij} = \frac{x_{ij} - \min\{x_i\}}{\max\{x_i\} - \min\{x_i\}}\,(\mathrm{MAX} - \mathrm{MIN}) + \mathrm{MIN},$$ where MAX and MIN indicate the maximum and minimum values after the normalization, for which we use 1 and 0 for simplicity. The max{·} and min{·} return the maximum and minimum element in a given vector, respectively. For each spectrum, we obtain the normalized one denoted as $\tilde{x}_i$. ML-PIL BASED HASHING SCHEME AND APPLICATION The ML-PIL-based hashing scheme can be divided into two stages: the deep hashing learning model training stage and the ANNS query stage. In the model training phase, we construct a deep hash learning model to project all the target data into a feature space, then we encode the final feature representations of the last hidden layer's outputs into "fingerprints". For a well-trained ML-PIL-based hash network, we can get the corresponding "fingerprints" using the query sample of Section 3.3 as input data. Similarly, we can get the "fingerprints" of the "UNKNOWN" spectra in DR5.
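As a concrete illustration of this preprocessing, a minimal numpy sketch (the grid spacing and function names are illustrative assumptions, not the pipeline's code):

```python
import numpy as np

def rebin(wave, flux, grid):
    """Linearly interpolate one spectrum onto a common wavelength grid."""
    return np.interp(grid, wave, flux)

def normalize(flux, new_min=0.0, new_max=1.0):
    """Min-max rescale a flux vector to [new_min, new_max] (here [0, 1])."""
    lo, hi = flux.min(), flux.max()
    return (flux - lo) / (hi - lo) * (new_max - new_min) + new_min

# Example: a fixed-step grid spanning the LAMOST wavelength coverage.
grid = np.linspace(3690.0, 9100.0, 3000)
```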
We organize the description of this model construction into several subsections, including the framework of ML-PIL, the hash encoding scheme, positive sample construction through clustering, negative sample selection, and model training and performance evaluation, from Section 3.1 to 3.5. In the second, ANNS query stage, for any given "query", we search for similar spectra in the "UNKNOWN" data by calculating their similarities. The similarity between the query sample and each "UNKNOWN" spectrum is calculated by measuring the Hamming distance in the feature code space. The smaller the distance of a sample to a coded query spectrum in the hash space, the more similar it is to that query spectrum. Section 3.6 illustrates the aforementioned query stage. ML-PIL framework ML-PIL is a hierarchical network structure based on pseudo inverse learning, and it is stacked with several single hidden layer neural networks. For a given single hidden layer neural network in Fig. 3, we can get a single layer auto-encoder by training such that the output is approximately equal to the input. By stacking several such single layer autoencoders, we can get a multilayer autoencoder. Once a multilayer autoencoder is trained, the binary hash code of any sample is obtained from the deepest hidden layer. However, these complex models require iterative parameter adjustments and hence are computationally expensive. To overcome the computational complexity of the multilayer autoencoder, the PIL algorithm is introduced, exploiting the advantages of its random orthogonal feature mapping to speed up learning. PIL is a supervised learning algorithm for training a single hidden layer feed-forward neural network (SLFN). The basic idea is to find a set of orthogonal vector bases using the nonlinear activation function to make the output vectors of the hidden layer neurons orthogonal. Then the weights of the output connection of the network are approximately solved by calculating the pseudo inverse. The PIL algorithm uses only basic matrix operations to calculate the analytical solution of the optimization objective. It does not need iterative optimization and parameter adjustment. Therefore, its efficiency is much higher than that of the gradient descent based algorithms. Here, we give a detailed introduction to the PIL algorithm. For the SLFN in Fig. 3, the output for an input sample $x_j$ is $$o_j = \sum_{i=1}^{L} \beta_i\, g_i(W_i \cdot x_j + b_i),$$ where $g_i$ is the activation function, $W_i = [a_{i1}, \ldots, a_{iL}]^T$ is the input weight vector, $\beta = [\beta_1, \ldots, \beta_L]^T$ is the output weight matrix between the hidden nodes and the output node, and $b_i$ is the bias of the input matrix. The aim is to find the optimal weight matrix to minimize the loss function $$\min_{\beta} \|H\beta - T\|,$$ where $H$ is the hidden layer node output. This nonlinear mapping $H$ is defined by $$H_{ji} = g_i(W_i \cdot x_j + b_i).$$ In the PIL algorithm, once the bias and the input weight of the hidden layer are determined, the output matrix of the hidden layer is uniquely determined. The training of the single hidden layer neural network can thus be transformed into solving a linear system, and we can get the output weight β from $$\beta = H^{\dagger} T,$$ where $H^{\dagger}$ is the Moore-Penrose generalized inverse of the matrix $H$. PIL is modified as follows to get the PIL AutoEncoder (PIL-AE), so as to perform unsupervised feature representation: the input data are used as the output data, $T = X$. ML-PIL is derived from multiple stacks of PIL-AEs. Each PIL-AE is trained separately. The output of the hidden layer of the previous PIL-AE is connected to the input of the latter PIL-AE.
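Because the training reduces to the closed-form pseudoinverse solution above, one PIL-AE layer fits in a few lines of numpy. The sketch below is our illustration, not the authors' implementation; the hidden widths, seeds and function names are arbitrary, and the last function implements the median-threshold binarization described in the next subsection.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_pil_ae(X, n_hidden, seed=0):
    """One PIL autoencoder layer: random, never-tuned input weights;
    output weights solved in closed form via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = sigmoid(X @ W + b)          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ X    # beta = H^dagger T, with T = X
    return (W, b, beta), H

def ml_pil_features(X, widths=(512, 256, 256)):
    """Stack PIL-AEs layer by layer; return the deepest hidden features
    and the per-layer parameters needed to encode new spectra."""
    H, layers = X, []
    for i, w in enumerate(widths):
        params, H = train_pil_ae(H, w, seed=i)
        layers.append(params)
    return H, layers

def binarize(H):
    """Median-threshold each feature dimension -> 'balanced' hash bits."""
    return (H > np.median(H, axis=0)).astype(np.uint8)
```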
The layer-by-layer trained PIL-AEs are then stacked into an ML-PIL (see Fig. 4). The output of the last hidden layer is used for hash mapping. Proposed hashing scheme As described in the previous subsection, the feature expression can be learned from the last hidden layer of ML-PIL. These features can be projected into the hash code space through hash mapping to obtain the "fingerprint". The "fingerprint" is a binary number consisting of a series of 0s and 1s. A perfect hash mapping should have the following properties simultaneously: (1) Similar samples should be mapped to similar hash codes (usually called similarity-preserving or coding consistency). (2) The hash codes should be "balanced" (usually called coding balance), which means that, for each bit in the code, half of the samples are mapped to 1 and the other half are mapped to 0 (or −1). (3) All bits should be independent of each other. Fig. 5 illustrates the procedure of feature learning and hash coding. We define a threshold with which the features H are made binary. To be specific, we choose the median value of each feature dimension as the threshold. Then the feature values that are greater than the threshold are mapped to 1; otherwise, they are mapped to 0. By doing so, the learned binary codes are guaranteed to be "balanced". Positive samples The size of the training set for any machine learning (ML) algorithm depends on the complexity of the algorithm; for the ML-PIL-based hashing scheme, thousands of positive samples are needed to represent M-type spectra covering all subtypes, luminosity classes and a wide range of S/Ns, especially low S/N ones, since the "UNKNOWN" data have universally low S/N, as shown in the top panel of Fig. 2. Therefore, we cluster the released M-type stars in LAMOST DR5 V1 to select various positive samples from each cluster. Before clustering, all the M-type spectra are shifted to rest frames; then two machine learning methods are adopted, namely Balanced Iterative Reducing and Clustering Using Hierarchies (BIRCH) (Zhang 1999) and K-means (Arthur & Vassilvitskii 2007). The BIRCH algorithm builds a tree called the Characteristic Feature Tree (CFT) for the given data. It incrementally clusters the data points, uses a fraction of the dataset memory, and updates the clustering decisions when new data come in. The K-Means algorithm clusters data by trying to separate samples into n groups of equal variance, minimizing a criterion known as the inertia or within-cluster sum-of-squares. It has been widely used in many different fields (Almeida & Prieto 2013; Wei et al. 2014). First, the BIRCH algorithm is adopted to cluster the 529,629 M dwarf spectra in LAMOST DR5 V1 into 50 groups. Principal Component Analysis (Jolliffe 2002) is applied to reduce the dimensions of the spectra. Second, for each group, 20 sub-clusters are obtained using K-Means. For M giants, 80 clusters are obtained using only the K-Means algorithm. Thus, we initially obtain 1,080 average spectra for all cluster centers. After manual inspection, 23 defective spectra with flux gaps (see Fig. 6) are abandoned, and 6,699 spectra are then randomly selected from the remaining 1,057 clusters. Supplemented with the 38 template spectra used in the LAMOST pipeline, 6,737 M-type positive spectra are ultimately assembled. As shown in Table 1, the positive samples of the 6,737 M-type spectra include 10 M-type subclasses, both luminosity classes (dwarf and giant), and a wide range of S/Ns. We present four typical positive samples in Fig.
7: three high S/N spectra and one low S/N spectrum, comprising an early giant, a late giant, and two early dwarfs. Negative samples First, 10,000 negative samples are randomly selected from the "UNKNOWN" spectra in LAMOST DR5 V1 and visually confirmed as non-M-type. Another 5,000 non-M-type spectra with known classes are randomly selected from the data release and shifted to rest frames. In total, we thus obtain 15,000 negative samples. Model training and performance evaluation The aforementioned 21,737 positive and negative samples are used to train and validate the designed ML-PIL model. The ML-PIL model comprises three hidden layers, since Wang et al. (2017) demonstrated that more hidden layers do not help much in improving performance. The Sigmoid activation function is selected for each hidden layer. The length of the hash code ("fingerprint") derived from the features learned through ML-PIL affects the performance of the ANNS, so an appropriate code length should be decided via performance evaluation. We use "Accuracy" and "Recall" to evaluate the performance of the ML-PIL hashing search. The "Accuracy" is defined as $$\mathrm{Accuracy} = \frac{TP}{TP + FP},$$ where TP denotes the number of true positive samples in the query result and FP the number of false positive samples. The "Recall" is defined as $$\mathrm{Recall} = \frac{TP}{TP + FN},$$ where FN denotes the number of false negative samples. We plot the Accuracy-Recall curves (Fig. 8) to evaluate the performance of the model, setting the code length to 32, 64, 128, and 256 bits. Each value of "Accuracy" and "Recall" in Fig. 8 is the average of ten thousand ANNS results. We perform ten thousand ANNS runs to guarantee that each of the 21,737 samples can be selected; therefore, an unbiased statistical result is obtained. The training set and validation set of each ANNS run are randomly selected. In Fig. 8, a larger area under the curve intuitively indicates better performance, that is, both "Accuracy" and "Recall" reach a higher level. It can be observed that as the code length increases, the performance of the model improves; beyond a certain point, however, the performance becomes less sensitive to the code length. Therefore, we ultimately choose 256 as the code length. Finally, we obtain an effective ML-PIL hash learning model with both high "Accuracy" and "Recall". Besides, the time consumed for training the ML-PIL framework is 11.76 s, which is much less than that of traditional gradient descent based deep learning networks. Application of ML-PIL based hash learning to recognize M-type spectra We apply the ML-PIL hash learning method to search for M-type spectra in the "UNKNOWN" data of LAMOST DR5 V1 with the 6,737 query (positive) sample spectra described in subsection 3.3. We first derive the hash codes for the query samples and all "UNKNOWN" spectra through the ML-PIL hash model. Then, for each "query" we calculate the similarity between the query sample and each "UNKNOWN" spectrum using the Hamming distance between their hash codes: the smaller the distance, the greater the similarity. Fig. 9 shows one example of the top 10 search results for a late-type M spectrum. Fig. 10 shows another example of the increasing dissimilarity with Hamming distance for an early-type M spectrum. The results ranking in the top 10% of similarity are kept for each of the 6,737 searches.
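For reference, the two figures of merit defined above reduce to simple set counts per query; in this sketch, `retrieved` and `relevant` are illustrative index sets, not the authors' data structures:

```python
def accuracy_recall(retrieved, relevant):
    """'Accuracy' (TP/(TP+FP)) and recall (TP/(TP+FN)) of one ANNS query,
    with retrieved and relevant given as sets of sample indices."""
    tp = len(retrieved & relevant)
    fp = len(retrieved - relevant)
    fn = len(relevant - retrieved)
    return tp / (tp + fp), tp / (tp + fn)

# e.g. a query returning {1, 2, 3, 9} against true positives {1, 2, 4}:
print(accuracy_recall({1, 2, 3, 9}, {1, 2, 4}))   # (0.5, 0.666...)
```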
We manually inspect the union of these 6,737 subsets and recognize 11,410 M-type spectra (11,156 objects), including 10,242 dwarf and 1,168 giant spectra, from the 642,178 "UNKNOWN" spectra in LAMOST DR5 V1. We make a supplemental catalog and re-archive all these 11,410 spectra from the "UNKNOWN" category into the M-type star category in LAMOST DR5 V2, which will be officially released in June 2019. Like former LAMOST data releases, we measure the same parameters for these spectra in the catalog, including indices of nine molecular bands, the equivalent width of Hα, magnetic activity, and the metal-sensitive parameter ζ (Yi et al. 2014; Guo et al. 2015), etc. In addition, the catalog also provides the spectral subtype for these spectra, determined using an improved Hammer package. The improvement to the original Hammer was made by Yi et al. (2014), who incorporated three new indices to increase the classification correctness. In the catalog, each object also has a radial velocity, measured through cross-matching with dwarf templates, and a giant/dwarf separation, determined using the suggested methods described in Section 4.1. This supplemental catalog can be downloaded from the web site http://paperdata.china-vo.org/Guoyx/2018/M_etable.txt. Table 2 shows the first five rows of the catalog. Fig. 11 and Fig. 12 show the distributions of the spectral subtypes and the S/Ns of the 11,410 spectra, respectively. The number distributions of the M-type spectra in LAMOST DR5 V1 and the supplemental spectra in mag r space are compared in Fig. 13. These supplemental M-type spectra not only have fainter luminosities, but also have a higher proportion of late types than the M-type spectra in LAMOST DR5 V1; the comparison is shown in Fig. 14. The total number of late M-type spectra (later than M5) recognized through ML-PIL based hash learning is 569. Adding the 11,410 M-type spectra from the "UNKNOWN" data in LAMOST DR5 V1 to the M dataset of LAMOST DR5 V1, which originally has 583,728 M-type spectra, we now possess a larger M star catalog for DR5 (defined as "ALL M" hereafter) to study the giant/dwarf separation and the magnetic activity in the discussion section. Luminosity class indicators We use the "ALL M" objects to check both the spectroscopic and the photometric criteria for the separation of M giants and dwarfs proposed by Zhong et al. (2015) (Zhong2015 for short), which are the CaH2+CaH3 versus TiO5 line index diagram and the J − K versus W1 − W2 color diagram, respectively, and suggest better spectroscopic and photometric separators for M giants/dwarfs, which are the CaOH versus CaH1 line index diagram (middle panel of Fig. 15) and the H versus J − K color diagram (top panel of Fig. 16). We use the HRD of Gaia DR2 to verify the suggested separation approach. Accurate parallaxes and proper motions for the vast majority of "ALL M" are obtained through cross-matching within 5 arcsec to Gaia DR2 (Gaia Collaboration et al. 2018a), which became available in April 2018. We build the Gaia HRDs by simply estimating the absolute Gaia magnitude in the G band for each individual star using $M_G = G + 5 + 5\log_{10}(\varpi/1000)$, with the parallax $\varpi$ in milliarcseconds (neglecting the extinction) (Gaia Collaboration et al. 2018b). This is valid when the relative uncertainty on the parallax is ≲ 20% (Luri et al. 2018). First, we choose the early M-type spectra to validate the luminosity discrimination based on spectral features. As shown in the Gaia HRD in the top panel of Fig.
15, the M giants (red) lie in the upper branch while the M dwarfs (black) are clearly separated in the lower branch. The middle panel of Fig. 15, the CaOH versus CaH1 diagram, shows that the same giant population (red, as in the top panel) lies in the upper branch of this diagram. However, the Zhong2015 criterion is weaker at discriminating M giants from dwarfs for early M-type spectra. The bottom panel of Fig. 15, the CaH2+CaH3 versus TiO5 diagram, shows that some giants overlap with dwarfs in the lower branch, where the dwarfs are located. This overlap means that the criterion in Zhong2015 will lead to a small portion of M giants being misclassified as dwarfs. Then, we examine the effectiveness of the H versus J − K criterion using the total "ALL M" sample, and we can see the different loci of dwarfs and giants in the Gaia HRD in the bottom panel. As shown in the top and middle panels of Fig. 16, both the criterion suggested in this paper and that of Zhong2015 can separate giants from dwarfs. In these two panels, dwarfs are shown in black, while giants classified by the criterion in this paper or by Zhong2015 are shown in red or blue, respectively. Comparing these two groups of giants from the different separators in the Gaia HRD shown in the bottom panel of Fig. 16, part of the giant candidates (∼12%) from the Zhong2015 criterion, J − K versus W1 − W2, should actually be dwarfs lying in the main-sequence strip. It is clear that H versus J − K can more easily eliminate possible dwarf contamination from the giant sample than the method given in Zhong2015. Using both the spectral features and the 2MASS photometry, we determine each M-type spectrum in the supplemental catalog to be a giant or a dwarf. From this result, we conclude that even without 2MASS infrared data we can still efficiently separate M giants from M dwarfs based on spectral features. Magnetic activity and kinematics Magnetic fields affect the chromospheric activity of M dwarfs, and Hα emission can be an indicator of chromospheric activity. We investigate the magnetic activity of M dwarfs by measuring the equivalent widths (EWs) of Hα. If the S/N of the continuum around Hα of an M dwarf is greater than 3, the EW of Hα is checked: a value greater than 1 marks the star as active, and a smaller value as inactive (Guo et al. 2015). If the S/N around Hα of an M dwarf is less than 3, the activity of that M dwarf is not measured. The upper panel of Fig. 17 shows that the EWs of Hα increase toward later subtypes, while the lower panel shows that the mean fraction of active stars increases (and that of inactive stars decreases) toward later spectral subtypes. This implies that later M dwarfs show stronger magnetic activity and a higher active fraction. We also investigate the velocities and velocity dispersions for both active and inactive M dwarfs. Combining radial velocities, distances and proper motions from Gaia, the heliocentric space motions (U, V, W) are computed according to the method of Johnson et al. (1987). The 3D velocities are computed in a right-handed coordinate system, with positive U velocity toward the Galactic center, positive V velocity in the direction of Galactic rotation and positive W velocity toward the north Galactic pole. The velocities are corrected for solar motion, (10, 5, 7) km/s (Dehnen & Binney 1998), with respect to the local standard of rest. These kinematical parameters are also provided in the supplemental catalog.
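As a side note, the absolute-magnitude computation and the parallax-quality cut used for the Gaia HRD above amount to a few lines; the array names here are illustrative, and extinction is neglected as in the text:

```python
import numpy as np

def gaia_abs_mag(G, parallax_mas, parallax_err_mas):
    """M_G = G + 5 + 5 log10(parallax/1000), kept only for stars whose
    relative parallax error is below 20%; others are returned as NaN."""
    G = np.asarray(G, dtype=float)
    plx = np.asarray(parallax_mas, dtype=float)
    err = np.asarray(parallax_err_mas, dtype=float)
    MG = np.full(G.shape, np.nan)
    ok = (plx > 0) & (err / plx < 0.2)
    MG[ok] = G[ok] + 5.0 + 5.0 * np.log10(plx[ok] / 1000.0)
    return MG
```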
The M dwarfs are binned in 100 pc increments of absolute vertical distance from the Galactic plane. The UVW velocity mean values and velocity dispersions as a function of absolute vertical distance for the active and inactive populations are shown in Fig. 18. From the figure, we can see that the active M dwarfs have a systematically lower velocity dispersion in the W direction, while their mean velocities are higher in the U and V directions. The two populations are clearly separated, suggesting that the active M dwarfs were born in a younger kinematical population, which is consistent with Hawley et al. (2011). The UVW velocity mean values decline with increasing absolute vertical distance, whereas the UVW velocity dispersions rise, for both the active and inactive populations. This result agrees well with the trend for thin disks shown in Bochanski et al. (2007). Furthermore, although we find that the strength of the Hα emission line varies among multiple observations for some M dwarfs, we do not have enough data to draw any conclusion; this requires analysis of other physical characteristics, such as flares, rotation, and their intrinsic relationships, using time-domain photometric and spectroscopic observations. SUMMARY A binary nonlinear hashing algorithm based on ML-PIL is proposed to effectively learn the spectral features of M-type stars, in order to search for M-type stars missed due to failures of multi-template matching, particularly for low signal-to-noise ratio spectra. We construct a specific ML-PIL model for the learning and searching, and build a positive sample set by clustering both high and low S/N known M-type spectra. Evaluating the performance of the model and applying it to the 642,178 "UNKNOWN" spectra in LAMOST DR5 V1, we finally recognize 11,410 M-type spectra and make a catalog to supplement the M-type star catalog of LAMOST DR5 V1. For the recognized spectra, some useful values are calculated, including indices of molecular bands, magnetic activities and the metal-sensitive parameter ζ. Adding the M-type spectra recognized through ML-PIL to the originally released M-type stars in DR5 V1, we obtain a complete catalog of M-type stars in LAMOST DR5, which will be officially released in June 2019. Through cross-matching, the common objects with Gaia DR2 are used to study the giant/dwarf separators based on the 2MASS color indices and LAMOST spectral line indices. We then propose two giant/dwarf separators and verify them with the HRD from Gaia DR2, by which we label the objects as dwarfs or giants and calculate kinematics for the M dwarfs. Given the good performance of the ML-PIL based hash learning algorithm and its successful application to the M-type spectra search, we believe it can effectively search for specific spectra, especially in low S/N data such as the LAMOST "UNKNOWN" dataset, in which early-type stars besides M-type stars may still exist. Commission. This research also makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
A short assessment of health literacy (SAHL) in the Netherlands Background An earlier attempt to adapt the REALM (Rapid Estimate of Adult Literacy in Medicine) word recognition test to Dutch was not entirely successful due to ceiling effects. In contrast to REALM, the Short Assessment of Health Literacy (SAHL) assesses both word recognition and comprehension in the health domain. The aim of this study was to design, test and validate a SAHL for Dutch patients (SAHL-D). Methods We pretested 95 health-related terms (n = 127) and selected the 33 best-performing items for validation in a quantitative survey (n = 329). For each item, a correct recognition (1 point) and comprehension (1 point) contributed to the total score (scale 0–66). Internal consistency was assessed using Cronbach's alpha. Construct validity was examined by analyzing association patterns of SAHL-D with educational level, objective and subjective health literacy, prose literacy, and vocabulary. Receiver operating characteristic (ROC) curves, with prose literacy as the reference standard, determined optimal cut-off scores. Results Cronbach's alpha was 0.77 for recognition, 0.79 for comprehension, and 0.86 for the total score. Scores differed significantly and substantially by educational level. Association patterns mostly confirmed a priori expectations in direction and strength, thereby supporting the construct validity of the SAHL-D. The optimal cut-off scores for differentiating between adequate and low literacy lie between 52.5 and 55.5. A shorter SAHL-D version presenting 22 terms offers comparable prediction performance. Conclusion The results provide positive evidence for the reliability and validity of the SAHL-D. The SAHL-D can be applied to analyze the role of health literacy in health and healthcare, and for the development and evaluation of targeted interventions. Electronic supplementary material The online version of this article (doi:10.1186/1471-2458-14-990) contains supplementary material, which is available to authorized users. Introduction In our current information society, individuals are increasingly required to participate in complex decision-making processes. For example, managing health and finances involves obtaining and processing complex information, and making decisions in interaction with domain experts such as physicians and financial planners. To succeed in these tasks, individuals need to be 'literate' in various ways. Rapid and reliable assessments of these literacy levels are needed, not only to help professional communicators, but also to study the effects of literacy deficiencies and evaluate literacy-focused interventions. This paper presents a new health literacy assessment for Dutch patients. Background In its general sense, literacy refers to the ability to read and write. At the basic level, this ability is associated with reading fluency and word recognition as measured by standard reading tests. At an advanced level, this ability is associated with vocabulary, i.e. knowledge of word meanings. Both word recognition and vocabulary are essential for reading comprehension [1]. A broader notion is adult functional literacy [2], which covers three subskills required in everyday life, independent of topic domains: prose reading, comprehending diagrams, and doing computations. The central skill when it comes to using health information seems to be prose reading, i.e. making sense of texts.
This requires not only lexical knowledge, but higher-order processes such as contextual meaning construction as well. In addition to these general literacy concepts, there is a growing interest in domain-specific literacies, which has provided concepts such as financial literacy [3], media literacy [4] and health literacy (HL) [5]. The definitions of these concepts vary considerably. In the field of HL, broad conceptual definitions go hand in hand with specific operational definitions [6,7]. In a content analysis of the HL literature, Sørensen et al. [8] distinguished between accessing, understanding, appraising and applying health-related information. Nutbeam [9] proposed the following levels of HL: 1) basic reading and writing skills needed to understand health information (functional HL); 2) advanced cognitive, social and literacy skills needed to communicate about health (interactive HL); and 3) advanced cognitive, social and literacy skills needed to critically analyze and apply health information in one's own situation (critical HL). Valid and reliable measurement of HL is essential to investigate the impact of low HL on population health and healthcare use, to analyze the differential effectiveness of health interventions by HL level, and to develop, evaluate and implement effective evidence-based interventions targeting people with low HL. Clinical applications of HL assessment intend to enable clinicians to effectively adapt their communication strategies to patients with low HL. Brief and easy-to-use HL measures have been developed in English, including the Rapid Estimate of Adult Literacy in Medicine (REALM) [10]. Fransen et al. [11] adapted the REALM by translating the 66 English words into Dutch (REALM-D). Although the REALM-D proved to be feasible and reliable, it did not differentiate between intermediate and higher education levels. Of these latter groups, the proportions correct were high (94% and 97%, respectively) and even the low-educated group scored 87%, suggesting that the test suffers from a ceiling effect. Interestingly, Nurss et al. [12] and Lee et al. [13] had similar experiences in constructing a Spanish version of REALM: highly skewed distributions with a large majority of the scores being ≥ 90% [12,13]. Nurss et al. [12] explained this by pointing out that Spanish has a more regular correspondence between graphemes and phonemes (letters and sounds) than English, so that Spanish words are relatively easy to pronounce. To overcome this problem, Lee et al. [13] introduced a semantic component in their word-based test. First, they developed the SAHLSA (Short Assessment of HL for Spanish-speaking Adults), which was later supplemented by an English version (SAHL-E) [13,14]. For every term, the participant has to choose between two words, of which only one is meaningfully related to the term. To use an example from the later English version SAHL-E, kidney had to be associated with either urine or fever. In order to receive one point for an item, both the pronunciation and the association had to be correct. The SAHLSA produced a more balanced score distribution, was reliable and unidimensional, and correlated well (Pearson 0.65) with the Test of Functional Health Literacy in Adults (TOFHLA). Lee et al. also presented an 18-item version of the SAHLSA [14]. Since Dutch resembles Spanish in its relatively transparent orthography, adding a semantic component to a pronunciation task is assumed to produce a more powerful Dutch HL measure than the REALM-D.
The aim of this study was to design and test a SAHL for Dutch patients (SAHL-D), as well as to validate it against various other literacy measures, including a prose comprehension test. Pretest The authors HPM and MF selected 95 candidate SAHL-D terms from a Dutch thesaurus of health terms, http://www.thesauruszorgenwelzijn.nl [15], of which 20 were related to medical specialties, tests and treatments (e.g. oncology, defibrillation), 15 to bodily functions and health behaviors (e.g. biorhythm, hygiene), 25 to the human body (e.g. pigment, pancreas) and 35 to diseases and symptoms (e.g. embolus, hemophilia). The chosen terms were potentially relevant to a general public. We avoided acronyms and terms referring to phenomena only known to medical professionals and particular patient groups. All terms were provided with a correct and an incorrect association word, using medical dictionaries when necessary. For example, 'hemophilia' could be associated with 'clotting' (correct) or 'immunity' (incorrect). The target word, the two associates and a 'Do not know' option were presented on cards, using large print. Potential participants for the pretest were approached by undergraduate students (Language and communication) in the waiting room of the outpatient clinic of Internal Medicine at a large university hospital. Inclusion criteria were being aged ≥ 18 years and being able to communicate in Dutch. Those willing to participate signed an informed consent form, filled in a questionnaire and participated in a personal interview with one of the students. The questionnaire assessed general vocabulary skills based on a written multiple choice vocabulary test used in the 8th grade of Dutch pre-vocational secondary education [16]. Each item presents a sentence with one word underlined; the respondent has to choose the correct meaning of that word from the four possible meanings that are offered. In the personal interview, the SAHL-D was administered by handing the participant the 95 cards, one by one. Word recognition was assessed by asking the participant to read the word out loud. The instructions for students contained information on correct phonetic pronunciation and the correct stress of each syllable in each word. Word comprehension was assessed by asking participants to choose the correct word associated with the 'target' word, or to use the 'Do not know' option; participants were encouraged not to guess the answer. In the pretest we analyzed item scores and distributions of proportions correct to select the items with the best discriminative ability. Reliability of the set of 95 items was analyzed by Cronbach's alpha. Analyses of variance (ANOVA) were used to assess relations between educational level and scores. The feasibility was assessed by noting the administration time for a subset of participants. Finally, we examined whether word features (such as opaque orthography and corpus frequency) were related to recognition and comprehension of each word. Main study We selected a subset of the pretest item pool by rejecting items that were scored correctly for recognition or comprehension by at least 95% of the participants. This left 33 items that mainly refer to medical specialties, tests and treatments on the one hand, and diseases and symptoms on the other (Additional file 1). Most of the terms referring to body parts, bodily functions and health behaviors did not meet the inclusion criteria. We then constructed a more demanding semantic test component.
To assess word comprehension, instead of presenting 2 associated words we decided to present 3 candidate meanings of each word (1 correct, 2 distractors), together with a 'Do not know' option. As illustrated in Additional file 2, each item presents a distractor that is more or less related and a distractor that is more obviously incorrect. Whereas the semantic test component in the pretest measured 'surface-level familiarity' (knowing which notions are related to the term and which are not), the SAHL-D aims to tap into 'concept-level familiarity' (knowing what the term actually refers to) [17]. Participants for the validation study were drawn from a test panel of The Netherlands Institute for Health Services Research, which is a list of people who are periodically invited to participate in various health-related research studies [18]. Inclusion criteria were age 18-75 years, and ability to read, write and converse in Dutch. Participants were approached by mail with an online questionnaire; participants were asked to indicate whether they were willing to participate in a telephone interview later on. Only data of consenting participants were used. The following variables were assessed in the online questionnaire: -Background characteristics: Gender; age; educational attainment level (categorized following the International Standard Classification of Education); ethnic background; native language; whether they work(ed) in health care; and how often they had contact with a professional care provider in the past year. -Subjective health literacy: The HLS-EU-Q16 [20] was used to assess subjective health literacy. The HLS-EU was derived from a theoretical model that integrates health care, disease prevention and health promotion, and four information processing stages (access, understand, appraise and apply) related to health-relevant decision-making and tasks [8]. The HLS-EU-Q16 consists of 16 items scored on a 4-point scale (very difficult to very easy). For each item the option 'Do not know' was also provided [20]. In a telephone interview, the NVS-D and SAHL-D were administered. These tests were sent as pdf files by email, not beforehand but upon starting the interview. As soon as the mail arrived, the participant started working on the NVS-D, followed by the SAHL-D. -Newest Vital Sign (NVS): The NVS is a 6-question tool to assess an individual's ability to find and interpret information (both text and numerical information) on an ice cream nutrition label [21]. Earlier, Fransen et al. [11] translated and tested the NVS in Dutch (NVS-D); the cross-cultural adaptation and validation of the NVS-D is submitted for publication. During the interview, we sent one file with the ice cream label and another one with the questions; respondents were asked to open both files on their screen. The interviewer read the questions out loud while the respondents read the questions and looked at the label on their screen. -SAHL-D: The SAHL-D started with a title page and provided a single word per page, with the candidate meanings underneath it. The participant proceeded page by page. The item order was maintained, except in rare cases when words were skipped accidentally (by pressing the arrow button more than once); in those cases, the interviewer steered the participant back to the omitted word after the current item had been completed. At any time during the test, the participant saw only a single target word on the screen. Upon opening a new page, participants were given 5 seconds to pronounce the word, after which a multiple choice option was to be chosen immediately.
This procedure practically rules out the possibility of using dictionaries. The participants worked alone (possible consultations with others would have been overheard). Administration of the SAHL-D took 6.39 min on average. In the validation study we assessed the proportions of correct answers and the score distributions of the SAHL-D. Feasibility was assessed by calculating the percentages of refusal and acceptance and the time to complete the SAHL-D. Reliability was tested with Cronbach's alpha. To explore the possibility of a shorter SAHL-D, we created an item subset by first discarding recognition items with rest-item correlations of ≤ 0.10 in the 33-item reliability analysis and/or a proportion correct of ≥ 0.95. This left 22 recognition items. We included the shorter 22-item set (SAHL-D22) in the analyses to illustrate the potential for a briefer SAHL-D. Construct validity was examined by analyzing association patterns of the SAHL-D, NVS-D, HLS-EU-Q16, educational level, prose literacy and vocabulary scores in relation to predefined expectations about the size and pattern of the associations. The following hypotheses were formulated: -Regarding known-groups validity, we expected the SAHL-D to be able to distinguish between low, intermediate and high levels of education based on significant differences in the mean scores. -Because of partly overlapping constructs, we expected a strong correlation between general vocabulary, prose literacy, NVS-D and the SAHL-D. -We expected a significant (but not sizeable) correlation between the SAHL-D (objective measure) and the HLS-EU-Q16 (subjective measure). -Regarding associations with socio-demographic variables, earlier literacy research [22,23] led us to expect a strong positive association between the SAHL-D and educational level, and a moderate negative correlation between SAHL-D and age; no significant gender difference was expected. ANOVA pairwise comparisons with Bonferroni correction for multiple testing were used to test differences in the SAHL-D scores by educational level, age, gender, and profession (working in health care). The associations of the SAHL-D with general vocabulary, prose literacy, NVS-D, and HLS-EU-Q16 were tested with Pearson's correlations and stepwise linear regression analyses to correct for background variables. We used receiver operating characteristic (ROC) curves with adequate prose literacy as the reference standard to determine optimal cut-off scores for identifying objective HL. Pretest Of the 127 patients participating in the pretest, 51% were male, 20% had a low and 34% had an intermediate educational level; the age range was 20-85 years with a mean of 50.4 (SD 14.4) years. On average, the 95-word test took 9 min. The recognition task proved to be relatively easy, with a mean proportion correct of 0.93. Of the 95 words, 5 were correctly pronounced by all participants and another 53 items were correct for ≥ 95% of the participants. Cronbach's alpha for the recognition test was 0.94. The comprehension test was of similar difficulty (mean proportion correct 0.90). Of the 95 items, 4 were correctly scored by all participants and another 40 items were correct for ≥ 95% of the participants. Cronbach's alpha for the comprehension test was 0.93. The correlation between recognition performance and comprehension performance was 0.83 (Pearson r). Correlations of SAHL-D recognition and comprehension with general vocabulary were similar, i.e. 0.66 and 0.57, respectively.
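Cronbach's alpha, used for all the reliability figures here and in the main study, can be computed directly from the respondent-by-item score matrix; a minimal sketch of the generic formula (the paper does not state which statistical package was actually used):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_vars / total_var)
```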
The total correct score for the candidate items varied with educational level, although the effect size was modest (F(2,122) = 4.49, p < 0.05; η² = 0.069). Main study We aimed to include 300 participants in the validation study. In total 2000 individuals were invited to participate in an online survey and telephone interview; of these, 1037 filled in the questionnaire, of whom 595 agreed to be contacted by telephone, of whom 329 finally participated in the personal interview. No significant difference in educational level was found between participants and non-participants. The mean age of participants was 56.2 years compared with 49.3 years for non-participants (p < 0.05). There was a significant difference in gender between participants and non-participants: 41% of the participants were male compared with 50% of the non-participants (p < 0.01). Table 1 presents the characteristics of the participants in the validation study, as well as the proportions correct for recognition and comprehension. The grand means for proportions correct were 0.89 for recognition and 0.80 for comprehension (compared with 0.93 and 0.90, respectively, for the candidate item set in the pretest). Women had higher comprehension and total SAHL-D scores than men. Significant differences were found in the scores by age, educational level and profession in health care. The effect of educational level on the total scores (F(2,320) = 13.82, p < 0.001; η² = 0.183) was more robust than for the pretest item set. Cronbach's alphas for SAHL-D recognition, comprehension and total were 0.77, 0.79 and 0.86, respectively; for SAHL-D22, these alphas were 0.74, 0.73 and 0.83, respectively. Table 2 shows the correlations between SAHL-D22, SAHL-D33, general vocabulary, prose literacy, NVS-D, and HLS-EU-Q16. The SAHL-D and SAHL-D22 showed substantial correlations with prose literacy, vocabulary and NVS-D. The total SAHL-D and SAHL-D22 scores show higher correlations with the other literacy measures than the recognition scores or comprehension scores by themselves do. Hence combining recognition and comprehension components adds precision to literacy measurement. Another indication that recognition and comprehension provide different information lies in their correlation (0.63), which is substantial but far from perfect. The lowest correlations in Table 2 were those involving the HLS-EU-Q16. Table 3 shows that the associations between the SAHL-D and prose literacy (model 1), vocabulary (model 2) and NVS-D (model 3) remained significant after correction for differences in educational level, age, gender, and working in health care. The association between SAHL-D and subjective HL disappeared after those adjustments (model 4); the association between SAHL-D and educational level remained significant after adjustment for age, gender and working in health care (model 5). We determined the potential of the SAHL-D and SAHL-D22 to correctly identify individuals with adequate and inadequate HL. Inadequate literacy was defined as a prose literacy correct score of 6 or lower. This threshold was chosen to be well below the mean correct score for the lowest educational level (8.3); under this definition, 18% of the participants are inadequately literate. The area under the ROC curve was 0.80 (CI 0.73-0.88) for the SAHL-D. In the various uses of the SAHL-D, we may choose different cut-offs, i.e. the SAHL-D score below which the test taker is considered to be inadequately health literate.
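A minimal sketch of this ROC analysis and of the cut-off trade-off discussed next; `score` (total SAHL-D, 0-66) and `adequate` (the prose-literacy reference standard) are illustrative arrays, and scikit-learn is assumed to be available, although the paper does not name the software it used:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def cutoff_performance(score, adequate, cutoffs=(52.5, 54.5, 55.5)):
    """AUC plus sensitivity/specificity for detecting *inadequate* HL,
    flagging everyone who scores below a candidate cut-off."""
    score = np.asarray(score, dtype=float)
    low = ~np.asarray(adequate, dtype=bool)      # positive class = low HL
    print("AUC:", roc_auc_score(low, -score))    # lower score -> more likely low HL
    for c in cutoffs:
        flagged = score < c
        sens = (flagged & low).sum() / low.sum()
        spec = (~flagged & ~low).sum() / (~low).sum()
        print(f"cut-off {c}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

On the study data, the sensitivity/specificity pairs printed at 52.5, 54.5 and 55.5 would correspond to the percentage figures quoted in the following paragraph.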
High cut-offs help to correctly identify low literacy (as not many of the low-literacy participants reach the threshold), but are not useful in identifying adequate literacy levels, as many literate participants do not reach the threshold either. Conversely, low cut-off points better identify adequately literate individuals, but fare badly in detecting low literacy, as a considerable number of low-literacy participants outscore the threshold. Optimal cut-offs are to be found in the middle of the curve. For example, a cut-off score of 52.5 would correctly classify 66% of the test takers with inadequate HL as such and 86% of the test takers with adequate HL. For a cut-off value of 54.5 these values are 74% and 76%, respectively; a cut-off of 55.5 gives values of 80% and 69%. While a high detection rate for low literacy seems preferable, higher cut-offs also imply larger numbers of false positives (i.e. people incorrectly 'diagnosed' with low literacy). The final cut-off choice depends on the use of the test and the priorities in a given setting, especially the estimated costs of false-positive and false-negative results. Discussion Like other objective HL measures, the SAHL-D remains close to the basic literacy concept. The REALM [10] and the Medical Achievement Reading Test (MART) [24] check the pronunciation of words. The Test of Functional Health Literacy in Adults (TOFHLA) [25] uses cloze testing of short text passages and numeracy tasks, and the NVS [21] asks questions related to the comprehension of a nutrition label. All these measures were validated against equally basic measures, often other word recognition and cloze tests. The narrow scope of operational HL measures is not surprising. First, HL measures are often designed in response to the practical demand for tests that can be quickly administered. Second, activities such as accessing, appraising and applying information are harder to test objectively than understanding information, i.e. they are generally examined by means of self-assessment questions. Although Pander Maat & Lentz [26] found a substantial correlation between a health-vocabulary test and success in answering questions about medicine information leaflets, the relation between general and domain-specific literacy is still unclear. As prose (and document) literacy provides the ability to acquire new knowledge where needed, and individuals will often need to process new medical information, a general literacy test seems to be a sensible indication of HL. Nevertheless, from a face validity point of view, it is advisable to use health-related stimuli in literacy tests administered in the health domain. Furthermore, as argued by Baker [27], the distinction between general reading fluency and health-related reading fluency is important for research, because a health-related literacy measure is likely to be more closely related to health outcomes than a general literacy measure. A strength of this study is that the SAHL-D was based on a careful selection and pretest of health-related words that are frequently used in the Netherlands. Considerable effort was required to find items that were sufficiently demanding for the test, given that Dutch has a fairly transparent orthography; this may explain why the earlier REALM-D test was less successful. Furthermore, adding a comprehension component to the test yielded more discriminative power, at least in the more demanding format used in the main study.
A limitation of the present study is that, in the validation study, the sample was restricted to persons able to write and speak Dutch and having access to the internet. This probably means that, on average, our research sample is somewhat more literate than the general population. Therefore, we recommend that the SAHL-D be implemented in various clinical contexts and different populations to further investigate its reliability and validity. Another limitation is that there is no objective (health) literacy test available in Dutch. We therefore used an item sample taken from prose literacy tests used in Dutch higher secondary education. Since cut-off points were not available for these items, we defined adequate and inadequate prose literacy with reference to the mean proportion for the lowest educational group. Conclusion The SAHL-D represents a new HL assessment tool in Dutch, consisting of a recognition and comprehension test for 33 (or 22) health-related words. The results of the first validation study provide positive evidence for the reliability and validity of the SAHL-D. As hypothesized, we found a strong correlation of the SAHL-D with general vocabulary, prose literacy and the NVS-D; substantial correlations were found between all literacy measures, ranging from 0.53 to 0.61. We expected a significant (but not sizeable) correlation between the SAHL-D and the HLS-EU-Q16, since HL is subjectively measured in the HLS-EU and the SAHL-D is an objective measure; in fact a lower correlation was found between the SAHL-D and the HLS-EU-Q16, which was not significant after correction for educational level and other background variables. As expected, we found a significant correlation between the SAHL-D and educational level and age, the correlation with education being stronger than that with age. All these results support the construct validity of the SAHL-D. After adjustment for educational level, age was no longer significant in the regression model, indicating that differences in age could be explained by differences in educational level. Although we did not expect gender differences in SAHL-D scores, our regression analyses found that women scored higher than men, also after correcting for age and educational level. As our general vocabulary and prose literacy scores show no gender differences, this difference seems to be specific to the health domain. Discussion of related evidence can be found in Peerson & Saunders [28]. In conclusion, our results indicate that the SAHL-D is a valid Dutch-language measure of functional HL that
Optimizing a spin-flip Zeeman slower

We present a design of a spin-flip Zeeman slower controlled by a fast feedback circuit for a sodium Bose-Einstein condensate apparatus. We also demonstrate how the efficiency of the slower strongly depends on its intrinsic parameters, and compare these observations with a few theoretical models. Our findings lead to a simple three-step procedure for designing an optimal Zeeman slower for neutral atoms, especially for those atomic species with high initial velocities, such as sodium and lithium atoms.

I. INTRODUCTION

Laser cooling and trapping neutral atoms with a magneto-optical trap (MOT) has become an important step in the successful production of ultracold quantum gases [1]. To improve the capture efficiency of a MOT, a number of slowers were invented to slow hot atoms before they overlap with the MOT [1][2][3][4][5][6]. Atoms and a resonant laser beam counter-propagate in a slower. The longitudinal velocity and corresponding Doppler shift of these atoms decrease after they absorb resonant photons. After a few such absorptions, these slowed atoms are no longer resonant with the laser beam and thus cannot be further slowed down. In order to continuously slow atoms along the laser beam path, one can vary the laser frequency accordingly, as with the frequency chirp method [7] or by using broadband lasers [8]. Another widely-used method is to compensate differences in the Doppler shift with a spatially varying magnetic field generated by a Zeeman slower while keeping the laser frequency unchanged [1,[3][4][5][9][10][11][12][13]. In this paper, we present the design and construction of a spin-flip Zeeman slower controlled by a fast feedback circuit for a sodium Bose-Einstein condensate (BEC) apparatus. An efficient method of optimizing a slower with our simulation program and by monitoring the number of atoms trapped in the MOT is also explained. In addition, our data demonstrate how the efficiency of a slower strongly depends on a few of its intrinsic parameters, such as the intensity of the slowing laser beam and the length of each section in the slower. These findings result in a simple three-step procedure for designing an optimal Zeeman slower for neutral atoms, especially for those atomic species with high initial velocities, for example lithium atoms.

II. EXPERIMENTAL SETUP

A sodium beam is created by an oven consisting of a half nipple and an elbow flange. A double-sided flange with a 6 mm diameter hole in the center is used to collimate the atomic beam. To collect scattered atoms, a cold plate with a 9 mm diameter center hole is placed before the spin-flip Zeeman slower and kept at 10 °C with a Peltier cooler. Our multi-layer slower has three different sections along the x axis (i.e., a decreasing-field section, a spin-flip section, and an increasing-field section), as shown in Fig. 1(a). Compared with the single-layer Zeeman slower with variable pitch coils [11], this multi-layer design provides enormous flexibility in creating a large enough B for slowing atoms with high initial velocities (e.g., sodium and lithium atoms). The first section of our Zeeman slower produces a magnetic field with decreasing magnetic field strength B. We choose B ∼ 650 Gauss at the entrance of the slower, so the slowing beam only needs to be red-detuned by a δ of a few hundred MHz from the D2 line of 23Na atoms. This frequency detuning is easily achieved with an acousto-optic modulator, but is still large enough to avoid perturbing the MOT.
The spin-flip section is simply a bellows, which serves to maintain B = 0 so that atoms can be fully re-polarized, and also damps out mechanical vibrations generated by vacuum pumps. The increasing-field section creates a magnetic field with increasing B but in the opposite direction to that of the decreasing-field section, which ensures the magnetic field quickly dies off outside the slower. This slower can thus be placed close to the MOT, which results in more atoms being captured. The MOT setup is similar to that of our previous work [14] and its maximum capture velocity v_c is ∼50 m/s. To precisely adjust B inside the slower, all layers of magnetic coils are divided into four groups, and the different layers in each group are connected in series and controlled by one fast feedback circuit. A standard ring-down circuit consisting of a resistor and a diode is also connected in parallel with each coil for safely shutting off the inductive current in the coil. An important chip in our control circuit is a high-power metal oxide semiconductor field effect transistor (MOSFET) operated in the linear mode. We use a number of MOSFETs connected in parallel and mount them on top of a water-cooled cold plate to efficiently cool them. A carefully chosen resistor R_s of 50 mΩ is connected to each MOSFET's source terminal to encourage equal current splitting among the MOSFETs in parallel. R_s also limits each MOSFET's transconductance to a narrow range, which enables our feedback circuit to control both low and high electric currents with a single set of gains. The design of this feedback circuit is available upon request.

III. OPTIMIZATION

When neutral atoms pass through the Zeeman slower, only those atoms with a longitudinal velocity v(x) = −[2πδ + µB(x)/ℏ]/k are resonant with the slowing beam and can be slowed. Here µ is the magnetic moment, ℏ is the reduced Planck constant, and k and δ are the wavevector and frequency detuning of the laser beam, respectively. The actual deceleration a_s in the slower is thus given by

a_s = ηa_max = −v(x) dv(x)/dx, (1)

where η is a safe factor to account for magnetic field imperfections in a given slower and the finite intensity of the slowing laser beam, and a_max = ℏkΓ/2m is the maximal achievable deceleration. m and Γ are the mass and the natural linewidth of the atoms, respectively. Our largest MOT is achieved when we match B(x) inside the slower as precisely as possible to the prediction derived from Eq. 1 with η set at 0.65 in the decreasing- and increasing-field sections and v_f = 40 m/s, as shown in Fig. 1(b). Here v_f is the velocity of atoms at the end of the slower. N, the number of atoms in a MOT, can also be boosted by a larger atomic flux from a hotter atomic oven. However, this is generally not a favorable method for two reasons. First, a hotter oven generates atoms with higher initial average velocities, but a slower can only handle entering atoms up to a certain maximum velocity. Second, alkali metals' consumption rates sharply increase with the oven temperature. We find that convenient parameters to adjust are the slowing beam's intensity I and frequency detuning δ, the electric current i in each magnetic coil, and the length of each section of the slower. These parameters, however, cannot be optimized independently since there is a strong correlation among them.
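Before turning to the optimization, it may help to see the field profile that Eq. 1 and the resonance condition imply. The sketch below evaluates it for sodium; the physical constants are standard textbook values, while the entrance velocity v_i is an illustrative assumption of ours, not a value quoted by the authors, and the code is our illustration rather than their simulation program.

    # Minimal sketch of the decreasing-field profile implied by Eq. 1 and the
    # resonance condition v(x) = -[2*pi*delta + mu*B(x)/hbar]/k, for 23Na.
    # Constants are textbook values; v_i is an assumed illustrative entrance velocity.
    import numpy as np

    hbar = 1.0546e-34            # reduced Planck constant (J s)
    mu = 9.274e-24               # magnetic moment of the cycling transition (~ Bohr magneton, J/T)
    m = 23 * 1.6605e-27          # mass of a sodium atom (kg)
    k = 2 * np.pi / 589e-9       # wavevector of the Na D2 line (1/m)
    Gamma = 2 * np.pi * 9.79e6   # natural linewidth (rad/s)
    delta = -512e6               # red detuning of the slowing beam (Hz)

    a_max = hbar * k * Gamma / (2 * m)   # maximal deceleration, ~9e5 m/s^2
    eta = 0.65                           # safe factor, as in the text
    a_s = eta * a_max

    v_i, L_d = 850.0, 0.54               # assumed entrance velocity (m/s); section length (m)
    x = np.linspace(0.0, L_d, 200)
    v = np.sqrt(np.maximum(v_i**2 - 2 * a_s * x, 0.0))   # resonant velocity from Eq. 1
    B = -hbar * (2 * np.pi * delta + k * v) / mu         # field keeping atoms resonant

    print(f"a_max = {a_max:.2e} m/s^2, B(0) = {B[0]*1e4:.0f} G, B(L_d) = {B[-1]*1e4:.0f} G")

With these inputs, B(0) comes out near the ∼650 Gauss quoted for the slower entrance and falls to nearly zero at the spin-flip section, as expected.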
In order to efficiently optimize the slower, we developed a computer program to simulate B(x) and compared the simulation results to the actual magnetic field strengths in the slower under many different conditions (e.g., at various values of i). The actual magnetic field strengths were measured with a precise Hall probe before the slower was connected to the vacuum apparatus. The agreement between the simulation results and the measurements is good, i.e., the discrepancies are < 5%. This simulation program can thus mimic the actual performance of a Zeeman slower and provide a reasonable tuning range for each aforementioned parameter, which allows for efficiently optimizing the slower. Our simulation program is available upon request. One common way to evaluate a slower's performance is to measure, with a costly resonant laser setup, the exact number of atoms whose velocities are smaller than v_c. In the next four subsections, we show that a convenient detection method for optimizing a slower is to monitor the number of atoms captured in the MOT, i.e., more atoms trapped in a given MOT setup indicate a better performance of the slower.

A. Intensity of the slowing beam

Figure 2 shows that the MOT capture efficiency strongly depends on the slowing beam power P, i.e., N increases with P and then stays at its peak value N_max when P is higher than a critical value. This can be understood from the relationship between the safe factor η of a slower and the slowing beam intensity I. Based on Ref. [13], the safe factor η_laser at a finite I can be expressed as

η_laser = (I/I_s)/(1 + I/I_s), (2)

where I_s is the saturation intensity of the atoms, e.g., 6.26 mW/cm² for sodium atoms. Eq. 2 implies that the safe factor of any optimal Zeeman slower has an upper limit, η_laser^max, as long as the slowing beam intensity is fixed. In other words, a bigger η in the decreasing- or increasing-field section does not always lead to a more efficient Zeeman slower if I is given. For a Gaussian slowing beam with a width W, the intensity can be expressed as I(r) = I_0 · e^(−2r²/W²). Here r is the distance away from the slowing beam center and I_0 = 2P/(πW²) is the beam intensity at r = 0. N can be given by

N ∝ ∫₀^{R₀} H[r] r dr, (3)

where H[r] is a unit step function of r, equal to one where η ≤ η_laser^max(r) and zero otherwise, to account for the fact that atoms can be efficiently slowed only when η ≤ η_laser^max(r), and R₀ is the radius of the atomic beam. Figure 2 shows that our data taken with η = 0.65 in the decreasing- and increasing-field sections can be well fitted by Eq. 3 and that N saturates at P ≥ 70 mW. This indicates that 70 mW is the minimum power to achieve η = 0.65 for our apparatus. We can thus define η_p, the preferred η in the decreasing- and increasing-field sections at a given P, and derive its expression from Eq. 3 as

η_p = s(R₀)/[1 + s(R₀)], with s(R₀) = [2P/(πW²I_s)] e^(−2R₀²/W²), (4)

The predicted η_p as a function of P for our apparatus is shown in the inset of Fig. 2, which implies that P sharply increases with η_p and becomes infinitely large at η_p = 1. Therefore, the first step in designing an optimal Zeeman slower is to determine η_p from Eq. 4 based on the available slowing beam power.

B. The decreasing-field section

To focus on the decreasing-field section, the data shown in this subsection are taken with η set at 0.65 in the increasing-field section, a fixed distance between the atomic oven and the MOT center to maintain a fixed solid angle for the atomic beam, δ = −512 MHz, and P = 70 mW, which implies η_p is 0.65. Based on the discussions in Refs.
[3,9], N can be expressed as

N = N₀ ∫₀^{v_max} f(v) v³ e^(−mv²/2k_BT) dv, (5)

where N₀ is the initial number of atoms created by the oven, k_B is the Boltzmann constant, and the oven temperature T is set at 530 K in this work. f(v) is a correction factor to account for the transverse velocity distribution of slowed atoms after they absorb resonant photons in a Zeeman slower, which can be expressed as

f(v) = min{1, [r_mot v_f/(L √(v_r v/3))]²}. (6)

Here r_mot is the radius of a MOT, L is the distance between the MOT center and the end of a Zeeman slower, v_r is the recoil velocity of an atom in a slowing process, and √(v_r v/3) is the transverse velocity of slowed atoms [9]. In Eq. 5, v_max is the maximum velocity of entering atoms which can be handled by a slower. For a spin-flip Zeeman slower, v_max is given by

v_max = √(v_sf² + 2η_d a_max L_d), (7)

where L_d and η_d are the length and the actual safe factor of the decreasing-field section, respectively, and v_sf = −2πδ/k is the velocity of atoms which are resonant with the slowing beam at the spin-flip section, since B is zero in this section. For example, v_sf is 302 m/s at δ = −512 MHz in our sodium BEC apparatus. For a given δ, Eqs. 5-7 predict that N should monotonically increase with v_max, i.e., N increases with η_d at a fixed L_d (or N increases with L_d at a fixed η_d). However, our observations shown in Fig. 3(a) are drastically different from this prediction: at each L_d studied in this paper, N appears to first increase with η_d, reach its peak N_max at a critical value of η_d, and then decrease with η_d. Based on Eq. 7, we can also plot these data points as a function of v_max, as shown in Fig. 3(b). At each value of L_d, agreement between our data and the theoretical prediction derived from Eqs. 5-7 (dotted black line) can only be found when v_max is smaller than 800 m/s, which is approximately equal to the mean velocity (√(9πk_BT/8m)) of atoms entering our slower. In addition, we plot ∆N/∆L_d as a function of η_d in Fig. 3(c), where ∆N/∆L_d represents the number of extra atoms in the MOT gained from elongating the decreasing-field section by ∆L_d = L_d − 0.15 m. Figure 3(c) shows that ∆N/∆L_d drops to a much smaller value when L_d is increased from 0.3 m to 0.48 m. This implies that the ideal length of the decreasing-field section for our apparatus should be longer than 0.3 m and shorter than 0.48 m. It is worth noting that ∆N/∆L_d peaks at η_d ∼ 0.65 for each value of L_d, as shown in Fig. 3(c). Interestingly, the predicted η_p from Eq. 4 is also 0.65 at P = 70 mW, and Fig. 3(a) shows that η_d ∼ 0.65 is also the position where the largest N occurs. This indicates that η of our optimized decreasing-field section is actually equal to η_p. Therefore, one important procedure in designing an optimal Zeeman slower is as follows: 1) find η_p based on Eq. 4 from the available slowing beam intensity; 2) determine the length of the decreasing-field section from Eq. 7, i.e., L_d = [9πk_BT/8m − (2πδ/k)²]/(2η_p a_max); 3) find the electric currents i of the decreasing-field coils with our simulation program by precisely matching the simulated B to the prediction derived from Eq. 1.

C. The increasing-field section

The aforementioned discussion on the decreasing-field section applies to the increasing-field section as well. To study only the increasing-field section, the data shown in this subsection are taken at η_d = η_p = 0.65, L_d = 0.54 m, and P = 70 mW. Our data in Fig. 4(a) show that N is not a monotonic function of i at a given δ.
N first increases with i and reaches a peak at a critical value i_c, because a higher i leads to a larger deceleration, which means more atoms can be slowed and captured in the MOT. When i is higher than i_c, we find that N sharply decreases with i due to atoms being over-slowed. Because v_f = −(2πδ + µB_iMax/ℏ)/k, the data points in Fig. 4(a) can be converted to reveal the relationship between N and v_f, as shown in Fig. 4(b). Here B_iMax is the maximum magnetic field strength created by the increasing-field section at a fixed i. We find several interesting results: the final velocity of the atoms captured in the MOT appears to be ∼20 m/s; the maximum value of N does not depend on δ; and i_c linearly increases with δ, as shown in Fig. 4. Therefore, in addition to the procedures listed in Sections 3.A and 3.B, another useful procedure in designing an optimal spin-flip Zeeman slower is as follows: 1) choose a convenient δ, for example, δ around −500 MHz for sodium or lithium atoms; 2) find the ideal length of the increasing-field section from L_i = [(2πδ/k)² − v_c²]/(2η_p a_max); 3) determine i_c as in Fig. 4(a) and then set the current i at a value slightly smaller than i_c in the increasing-field coils.

D. The spin-flip section

We have also studied the contribution of the spin-flip section, but have not found a strong correlation between the slower's efficiency and L_sf, the length of the spin-flip section. Figure 5 shows that N always peaks at η_d ≈ η_p = 0.65 for three different values of L_sf, when L_d is kept at 0.54 m, η is set at 0.65 in the increasing-field section, δ is −512 MHz, and P is 70 mW. This figure also implies that a longer spin-flip section does not boost the number of atoms captured in the MOT, as long as its length L_sf is sufficient to fully re-polarize the atoms. A very long L_sf, however, has a negative effect on the MOT capture efficiency, because it also unavoidably reduces the solid angle of the atomic beam when all other parameters of the system remain unchanged.

IV. CONCLUSIONS

We have presented the design and construction of a spin-flip Zeeman slower controlled by a fast feedback circuit, and demonstrated an efficient method to optimize a slower by using our simulation program and monitoring the number of atoms trapped in the MOT. Our data suggest that an optimal Zeeman slower may be designed with the following procedures: 1) determine η_p based on Eq. 4 from the available slowing beam intensity; 2) choose a convenient δ and find the ideal lengths of the increasing- and decreasing-field sections from L_i = [(2πδ/k)² − v_c²]/(2η_p a_max) and L_d = [9πk_BT/8m − (2πδ/k)²]/(2η_p a_max), respectively; 3) set i at a value slightly smaller than i_c in the increasing-field coils and determine i of the decreasing-field coils with our simulation program. We have found that a longer spin-flip section does not boost the number of atoms captured in the MOT, as long as its length L_sf is sufficient to fully re-polarize the atoms. These conclusions are very useful in designing an optimal Zeeman slower for other atomic species, especially those with high initial velocities, for example lithium atoms.
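The three-step procedure reduces to a few lines of arithmetic. The sketch below evaluates the two length formulas from the conclusions for sodium, using the parameter values quoted in the text (η_p = 0.65 at P = 70 mW, T = 530 K, δ = −512 MHz, v_c ≈ 50 m/s); it is a minimal illustration of ours, not the authors' design code.

    # Minimal sketch of the three-step design procedure for a 23Na spin-flip
    # Zeeman slower, using parameter values quoted in the text.
    import math

    hbar = 1.0546e-34
    k_B = 1.3807e-23
    m = 23 * 1.6605e-27
    k = 2 * math.pi / 589e-9           # wavevector of the Na D2 line (1/m)
    Gamma = 2 * math.pi * 9.79e6       # natural linewidth (rad/s)

    a_max = hbar * k * Gamma / (2 * m)

    # Step 1: preferred safe factor at the available power (Eq. 4 / Fig. 2).
    eta_p = 0.65

    # Step 2: section lengths from the design formulas in the conclusions.
    T = 530.0                          # oven temperature (K)
    delta = -512e6                     # detuning (Hz)
    v_c = 50.0                         # MOT capture velocity (m/s)
    v_sf = abs(2 * math.pi * delta) / k            # resonant velocity at B = 0
    v_bar_sq = 9 * math.pi * k_B * T / (8 * m)     # mean-velocity squared of the beam
    L_d = (v_bar_sq - v_sf**2) / (2 * eta_p * a_max)
    L_i = (v_sf**2 - v_c**2) / (2 * eta_p * a_max)

    print(f"a_max = {a_max:.2e} m/s^2, v_sf = {v_sf:.0f} m/s")
    print(f"L_d = {L_d:.2f} m, L_i = {L_i:.2f} m")

These inputs give v_sf ≈ 302 m/s and L_d ≈ 0.5 m, consistent with the values reported above; step 3 (choosing the coil currents) then falls to the field-matching simulation.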
Fusidic Acid/Tea-Tree Oil Nanoemulsions: A Potentially Safe and Effective Anti-MRSA/MSSA Topical Agent for Chronic Wound Healing

Fusidic acid (FA) is clinically used as an antibacterial agent for the treatment of Gram-positive bacterial infections. It interferes with bacterial protein synthesis, specifically by preventing the translocation of the elongation factor G on the ribosome. In the present work, an oil-in-water nanoemulsion (NE) was developed as a carrier for the transdermal delivery of FA. Different oils, surfactants and co-surfactants were screened. The solubility of FA and the emulsifying capacity of the surfactants were determined, and phase diagrams for each oil and surfactant mix were constructed. From the analysis, eight stable NE formulations were chosen, and their physicochemical properties were further evaluated. The antibacterial activity against methicillin-resistant Staphylococcus aureus (MRSA) and methicillin-sensitive Staphylococcus aureus (MSSA) was also evaluated, and a cytotoxicity assay was conducted on the HS-27 cell line to determine the safety of the formula. It was found that the NE produced from tea tree oil has the best stability, with promising antibacterial activity against MRSA as compared to a commercially available product. The safety profile of the NE was also comparable to the commercial product; thus, the formulated FA-NE is promising for clinical use.

INTRODUCTION

A chronic wound is a wound that does not progress through the normal healing process in a timely manner. The problem normally lies in the inflammation phase of healing, which is due to excessive levels of proinflammatory cytokines, proteases and reactive oxygen species (ROS); the presence of persistent infection further complicates the treatment process (Frykberg & Banks 2015). The current treatment standard for chronic wound care includes systemic antibiotics and antiseptic solutions, to overcome deep-seated infections that are difficult to reach with simple topical application. Nanoemulsion (NE) is a type of emulsion with droplet sizes between 20 and 200 nm with narrow distributions. NE are transparent or translucent with a bluish colouration. NE is obtained by mixing two immiscible liquids with an emulsifier, followed by the application of high-energy techniques such as ultrasonication or homogenisation. The nanodroplets produced usually have excellent kinetic stability (Abolmaali et al. 2011). There are many advantages associated with NE over conventional emulsions. In terms of physical stability, the internal phase droplet size distribution is not affected by dilution of the external phase (Sugumar et al. 2015). In addition, with respect to biological activity, NE allows adequate localisation and skin penetration of active ingredients, which renders them effective locally as compared to conventional emulsion systems (Eleraky et al. 2020). The oil phase may consist of natural or synthetic oils and lipids, such as medium- or long-chain triglycerides or perfluorochemicals. Among the natural oils, plant-derived essential oils have garnered significant attention due to their insecticidal, anti-fungal and antibacterial properties with good safety profiles (Sugumar et al. 2014). They could be a good option in NE-based antibacterial formulations as they may contribute synergistically to the effectiveness of the active ingredients incorporated in the formula (Panaitescu et al. 2018). Fusidic acid (FA) is a tetracyclic triterpenoid that is structurally linked to cephalosporin P1.
It originates from the fungus Fusidium coccineum and differs from cephalosporin by the presence of three acetyl groups, which contributes to its enhanced antibacterial activity (Fernandes 2016). The FA nucleus has properties common to other tetracyclic structures such as the adrenocorticoids and bile salts, particularly cholate and taurocholate (Godtfredsen et al. 1962). It is related to other antibiotic groups, including the helvolic acids and the viridominic acids. Antibiotics similar or identical to FA are produced by dermatophytes such as Microsporum canis, Microsporum gypseum, and Epidermophyton floccosum (Perry et al. 1983). Fusidic acid has bacteriostatic activity against staphylococci, including both methicillin-sensitive and methicillin-resistant strains, Neisseria spp., Bordetella pertussis, Corynebacterium spp., and Gram-positive anaerobes like Clostridium difficile, Clostridium perfringens, Peptostreptococcus spp. and Propionibacterium acnes (Frimodt-Møller 2010). NE has the potential to increase the effectiveness of chronic wound treatment by improving the absorption of FA through the wound to eliminate possible infection. In the current study, FA was employed as the antibacterial agent and, in combination with essential oils possessing antibacterial activity, a NE was formulated as a potentially effective topical antibacterial formulation. Eight different oils (palm oil, sesame seed oil, lavender oil, orange oil, lemon oil, tea tree oil, eucalyptus oil and peppermint oil) were formulated in combination with FA as the active ingredient. The obtained NE were evaluated in terms of physicochemical properties, antibacterial activity against MRSA and MSSA strains, and cytotoxicity against human skin fibroblast HS-27 cells to assess the safety of the NE formulations.

MATERIALS

Pure FA powder was purchased from Sgonek Biological Technology Co. (China). Ethanol and methanol were purchased from QRec Asia (Malaysia). Tea tree oil, lavender oil, peppermint oil, lemon oil, orange oil, and eucalyptus oil were obtained from Soap Cart Co. (Malaysia) and palm oil was purchased from Sunlong Industrial and Trading Co. (China). Sesame seed oil and Span 20 were purchased from Moksha LifeStyle Products (India). Propylene glycol was purchased from Sigma-Aldrich (USA), polyethylene glycol 4000 was from Merck (Germany), while Tween 20, Tween 60, and Tween 80 were purchased from Euro-Chemo-Pharma (Malaysia). All reagents and chemicals used were of analytical grade.

CALIBRATION CURVE OF FA IN ETHANOL

Calibration curves of FA were prepared in ethanol using a UV/Vis spectrophotometer U-2800 (Hitachi, Japan). Briefly, 10 mg of FA was accurately weighed using a calibrated digital weighing balance (Ohaus, USA) and dissolved in 10 mL ethanol, producing a 1 mg/mL FA solution. This stock solution was further diluted to produce a range of working concentrations between 2 and 20 µg/mL. The UV/Vis absorbance of each working solution was measured at 235 nm in triplicate and a standard calibration curve was constructed accordingly.

DETERMINATION OF SATURATED SOLUBILITY OF FA

The saturated solubilities of FA in different solvents, oils and surfactants were determined by adding an excess amount of drug (100 mg) into 2 mL of each medium. With regard to Poloxamer 407 and polyethylene glycol 4000, a 1% w/v solution of each surfactant in water was prepared to evaluate the solubility of the drug.
The mixtures were then placed on a mechanical shaker (Thermo Shaker, Hangzhou Allsheng Instrument Co., China) at room temperature for 72 h. Samples were centrifuged (Hettich, Germany) at 5000 rpm for 15 min and the concentration of FA in the supernatant was assayed using a UV/Vis spectrophotometer at 235 nm. All tests were done in triplicate and the data are presented as mean (± s.d.).

SCREENING OF SURFACTANTS

Six surfactants (Tween 20, Span 20, Tween 80, Poloxamer 407, propylene glycol and polyethylene glycol (PEG) 4000) were screened for their ability to produce nanoemulsion, as previously described by Azeem et al. (2009). Briefly, 2 mL of surfactant solution was prepared and 5 μL of oil was added with vigorous mixing using a vortex for 30 s. The mixture was observed for the presence of turbidity. If a clear mixture was obtained, the addition of oil was repeated until the mixture turned turbid following vortex mixing.

CONSTRUCTION OF TERNARY PHASE DIAGRAM

Pseudo-ternary phase diagrams were prepared for three oils: palm oil (PO), sesame oil (SO), and tea tree oil (TTO). The pseudo-ternary phase diagrams consisting of oil, water and surfactant mixtures with different hydrophilic-lipophilic balance (HLB) values were constructed through the water titration method. The ratio of surfactant to co-surfactant was fixed at 1:1 and the ratio between oil and surfactant mix (Smix) was screened from 0.5:9.5 to 9.0:1.0 (oil:Smix). Distilled water was added to the oil and Smix mixture in increments of 100 µL by micropipette at room temperature. The samples were mixed vigorously with a homogeniser for 2 min and kept at room temperature for 24 h to reach equilibrium before further addition of distilled water was made. The physical appearance of the mixture was observed after each addition of distilled water. The formation of nanoemulsion was identified as a transparent or translucent liquid. The results obtained were plotted on a ternary phase diagram using Mix-School 3.51 software (Gupta et al. 2011).

PREPARATION OF FA-LOADED NANOEMULSION

FA-loaded NE was prepared with the oil/Smix/water ratios that produced transparent NE as determined by the pseudo-ternary phase diagram. The aqueous phase was added to the oily phase (containing 2% w/v FA) under vortex for 30 s. The characteristics of the prepared NE were evaluated accordingly.

THERMODYNAMIC STABILITY

In brief, three types of mechanical and thermodynamic stresses were tested: centrifugation, heating-cooling, and freeze-thaw cycling. Mechanical stress was applied through centrifugation at 3500 rpm for 30 min. Any change in physical appearance was recorded. The heating-cooling stress was done by storing the NE at 4 °C for 72 h, followed by storage at 40 °C for 72 h. This cycle was repeated three times and the homogeneity of the formulation was recorded. The freeze-thaw cycle was done by freezing the NE at -20 °C for 72 h, followed by storage at room temperature, also for 72 h. This cycle was repeated three times and changes were noted.

INVESTIGATION OF MICROMETRICS & ZETA POTENTIAL

The average droplet size (Z-average) and PDI of the formulated nanoemulsion system were analysed by the dynamic light scattering method using a Litesizer 100 (Anton Paar, Austria) at a scattering angle of 173° at 25 °C. The value of ζ-potential was determined at 25 °C with an electric field strength of 23.2 V/cm. All measurements are reported as an average of three replicates (± s.d.) (Zhu et al. 2015).
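The calibration and solubility assays described above reduce to fitting a Beer-Lambert line and inverting it for unknown samples. Below is a minimal sketch of that workflow; the absorbance values and the dilution factor are hypothetical placeholders, not the measured data.

    # Minimal sketch of a UV/Vis calibration workflow: fit absorbance vs.
    # concentration (Beer-Lambert linearity at 235 nm), then back-calculate an
    # unknown FA concentration. All values are hypothetical placeholders.
    import numpy as np

    conc = np.array([2, 4, 8, 12, 16, 20], dtype=float)       # ug/mL working standards
    absorb = np.array([0.11, 0.21, 0.42, 0.63, 0.83, 1.04])   # hypothetical A(235 nm)

    slope, intercept = np.polyfit(conc, absorb, 1)            # least-squares line
    r = np.corrcoef(conc, absorb)[0, 1]
    print(f"A = {slope:.4f}*C + {intercept:.4f}, r^2 = {r**2:.4f}")

    # Back-calculate the FA concentration of a supernatant sample, including
    # any dilution applied before measurement.
    sample_abs, dilution = 0.55, 10.0                         # hypothetical
    c_sample = (sample_abs - intercept) / slope * dilution    # ug/mL in the original
    print(f"FA concentration: {c_sample:.1f} ug/mL")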
DYE SOLUBILITY TEST

Two drops of 2% methylene blue were added to the nanoemulsion and visual observation was conducted after 5 min using a light microscope. An oil-in-water (o/w) nanoemulsion incorporates the dye rapidly, whilst clumps can be observed under the microscope for a water-in-oil (w/o) nanoemulsion. Following this test, the nanoemulsion was diluted with distilled water to investigate the presence of any phase separation in the system.

OPTICAL CLARITY (PERCENTAGE TRANSMITTANCE)

The percentage of transmittance indicates the homogeneity and clarity of the nanoemulsion. The percentage transmittance of the formulation was measured at 650 nm using a UV spectrophotometer with distilled water as the blank.

REFRACTIVE INDEX

The refractive index of the nanoemulsion was determined using an Abbe refractometer (Atago, Japan) with distilled water as the standard. One drop of the sample was placed on the glass slide and the reading was taken accordingly.

pH MEASUREMENT

The pH value of the nanoemulsion was determined using a digital pH meter (Eutech Instruments, USA), calibrated using standard buffer solutions at pH 4 and 7 before each use.

DROPLET MORPHOLOGY

The morphology of the NE was observed under a transmission electron microscope (TEM) (Hitachi High-Technologies Corp, Tokyo, Japan) at 100× magnification. The nanoemulsion was diluted 10 times, dropped on a copper grid coated with a carbon film, stained with 1% phosphotungstic acid (pH adjusted to 7.0) and air-dried before the analysis.

IN VITRO ANTIBACTERIAL STUDY

The antibacterial activity of the NE was evaluated in comparison to a commercially available 2% FA cream against MSSA and MRSA using the agar diffusion method. The agar was inoculated with log-phase (McFarland 3) bacteria and 6 mm diameter holes were punched in the agar using a cork borer. The holes were filled with the NE formulations, the formula without FA and the commercial FA cream. The plates were incubated at 37 °C for 24 h. The antibacterial activity was evaluated by measuring the diameter of the inhibition zone. All tests were done in triplicate.

IN VITRO CELL VIABILITY ASSAY

The cytotoxicity of the NE was assessed against human skin fibroblast HS-27 cells using the MTT assay (Mosmann 1983). Cells were maintained in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 1% penicillin/streptomycin (PAA Laboratories, Austria) and 5% fetal bovine serum. Briefly, the cells were plated at a density of 5000 cells/well and incubated for 24 h at 37 °C in 5% CO2. After 24 h, cells were treated with NE formulations at different concentrations (0.025, 0.05, 0.1, and 0.2%) for 24 h. Following that, 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide (MTT) dissolved in PBS at 5 mg/mL was added to all wells, and the plates were incubated at 37 °C with 5% CO2 for 4 h. The medium was then discarded and replaced with 100 μL DMSO, and the absorbance was measured at 570 nm using a microplate reader (Fisher Scientific, USA) (Latif et al. 2019). The percentage of cell viability was calculated in comparison to the untreated control.

RESULTS AND DISCUSSION

Choosing a stable and safe combination of ingredients for a formulation requires careful investigation. In this study, the authors aimed to prepare a NE formulation with FA as the active ingredient.
As FA is commonly available in semi-solid preparations such as creams and ointments, it would be a promising approach to explore the possibility of formulating FA in liquid form, suitable for spraying onto a wound for the prevention of microbial infection. The preparation of a stable NE will provide greater flexibility in choosing suitable dosage forms for the final product.

DETERMINATION OF SATURATED SOLUBILITY OF FA

The saturated solubility of FA in different solvents, oils and surfactants is presented in Table 1. Comparing the solvents, the solubility of FA is higher in ethanol (251.73±0.01 µg/mL) than in methanol (187.19±0.55 µg/mL) or distilled water (5.198±0.35 µg/mL). The solubilisation power of a particular solvent gives a quantitative estimate of its ability to solubilise a drug. It has been reported that solubilisation power is correlated with the solvent's polarity, besides the molecular structure of the solute (Desai & Park 2004). FA is a weakly acidic molecule, which exists in water in the protonated form. However, the presence of the large hydrophobic moiety in the structure (Figure 1) prevents complete solubilisation of FA in water. The solubility of FA in oil is an important factor in the selection of oil for a NE formulation, in order to prevent precipitation of the active ingredient during production and storage. It was found that the solubility of FA is highest in sesame oil (89.08±5.2 µg/mL), followed by palm oil (72.62±5.2 µg/mL) and tea tree oil (57.71±1.2 µg/mL). Sesame oil is known to have a higher content of unsaturated fatty acids than many other vegetable oils, whilst palm oil has similar portions of saturated and unsaturated fatty acids. A high content of unsaturated fatty acids helps in solubilising drug molecules. Most essential oils, such as lavender and peppermint oil, have a low proportion of unsaturated fatty acids, which leads to the low solubility of FA in these oils (Boateng et al. 2016). The solubility profile of FA in surfactants is important, as the drug is pre-solubilised in the oil and surfactant mix prior to the emulsification process. The FA stays in the oil droplets in solubilised form, which prevents precipitation in the system. Incomplete or low solubility in a surfactant will lead to leaching of the drug from the system, which subsequently leads to inefficient drug loading. In regard to the ability of surfactants to solubilise FA, Tween 80 has the highest solubilising ability (57.34±4.7 µg/mL) compared to the other surfactants. The choice of surfactant plays a vital role in the formation of a stable NE system. As surfactants may produce toxicity, choosing a safe surfactant at a suitable concentration is important. Non-ionic surfactants such as Tween and Span have shown lower cytotoxicity than cationic or anionic surfactants, which can cause skin irritation following topical application. Based on the solubility study of FA in the different oils, a pre-formulation study of the NE prepared from the three oils with the highest potential (sesame seed oil, palm oil, and tea tree oil) was conducted using six surfactants. The surfactants chosen were Tween 20, Span 20, Tween 80, Poloxamer 407, propylene glycol (PG) and polyethylene glycol (PEG) 4000. These are non-ionic surfactants from different groups and have different molecular structures and sizes.
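Surfactant screening of this kind is often guided by the hydrophilic-lipophilic balance of the surfactant mix, mentioned above in connection with the phase diagrams. The sketch below shows the standard weighted-average HLB calculation; the HLB values are typical literature figures used purely for illustration, not values reported by the authors.

    # Minimal sketch of the weighted-average HLB calculation commonly used when
    # blending surfactants for an o/w nanoemulsion. HLB values are typical
    # literature figures, used here purely for illustration.

    def blend_hlb(components):
        """components: list of (weight_fraction, hlb) pairs; fractions sum to 1."""
        return sum(w * hlb for w, hlb in components)

    # Example: a 1:1 Tween 80 / Span 20 mix (HLB ~15.0 and ~8.6, respectively).
    smix = [(0.5, 15.0), (0.5, 8.6)]
    print(f"blended HLB = {blend_hlb(smix):.1f}")  # ~11.8, in the o/w-favouring range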
The solubility of oil in surfactant is important to determine the affinity of the surfactant for the oil used in the formulation. The greater the amount of oil solubilised by the surfactant, the greater its nano-emulsification capacity. This characteristic also indicates the possibility of the system forming a NE. Figure 2 shows the percentage of palm oil, sesame oil and tea tree oil solubilised by the six tested surfactants. Tween 80 and PEG 4000 showed the highest oil-solubilising capacity for all three oils.

CONSTRUCTION OF TERNARY PHASE DIAGRAM

Pseudo-ternary phase diagrams were constructed to determine the quantity of each component needed to prepare a stable NE. Based on the experiments conducted on the solubility of FA, the following materials were used to construct the phase diagrams: distilled water, oils (sesame oil, palm oil and tea tree oil), surfactants (Tween 80 and PEG 4000) and co-surfactants (PG, ethanol). Our attempts to produce NE from sesame oil and palm oil were not successful: all the different mixtures of oil, surfactants, co-surfactants and water produced gels or conventional emulsions. Only tea tree oil successfully produced NE. All the tea-tree-oil-based phase diagrams are included in Figure 3. The higher fat content of sesame and palm oil compared to tea tree oil led to difficulties in producing NE. Interestingly, El-Refai et al. (2019) reported the successful production of NE based on sesame oil, using Tween 80 as the surfactant. Those authors used high-energy techniques, including high temperature and high-amplitude sonication for 45 min. This may explain why we were unable to produce sesame-oil-based NE using the present method. The high-energy methods and long processing time may not be cost-efficient and may affect the stability of the drug incorporated in the NE (Tubtimsri et al. 2020). The percentage of oil incorporated would also need to be reduced to enable dispersion of the oil droplets in the NE formulations. The pseudo-ternary diagrams gave an indication of the efficiency of the surfactant systems in producing NE. The size of the NE area in each diagram is different, with the biggest shown by diagrams C, D, G, and I. These diagrams correspond to the presence of PEG 4000 or PG/ethanol in the Smix, suggesting the importance of these ingredients in the production of NE. It has also been reported that a reduction in oil concentration will reduce the NE area (Azeem et al. 2009). However, this reduction was not observed in this study, perhaps due to the insufficient reduction of oil percentage between formulations. From the pseudo-ternary phase diagrams, 18 formulations were found to be stable, and they were subsequently loaded with FA. The formulations were then characterised for mechanical and thermodynamic stability. As presented in Table 2, only eight formulations showed good mechanical and thermodynamic stability. As stability towards temperature changes is an important criterion that differentiates NE from microemulsions (ME) (Aswathanarayan & Vittal 2019), this became the determining point in this study in choosing the formulations to be brought forward for further analysis. Table 3 summarises the eight formulations. The quantity of FA encapsulated within the NE was calculated using Formula 1: Encapsulation efficiency (%) = (amount of FA encapsulated in the NE / total amount of FA added) × 100 (1). It was found to be within the range of 92-99%.
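A minimal sketch of Formula 1 as it would be applied in practice follows, assuming the free (unencapsulated) drug is assayed after separating it from the droplets; all numbers are hypothetical placeholders, not the reported measurements.

    # Minimal sketch of the encapsulation-efficiency calculation (Formula 1),
    # assuming free drug is assayed in the separated aqueous phase.
    # All values are hypothetical placeholders, not the reported measurements.

    def encapsulation_efficiency(total_fa_mg, free_fa_mg):
        """EE% = encapsulated FA / total FA added * 100."""
        return (total_fa_mg - free_fa_mg) / total_fa_mg * 100.0

    total_fa = 20.0   # mg FA added to the batch, hypothetical
    free_fa = 0.9     # mg FA assayed outside the droplets, hypothetical
    print(f"EE = {encapsulation_efficiency(total_fa, free_fa):.1f}%")  # ~95.5%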
Both formulations that contained PEG 4000 and ethanol (FA-NE9 and FA-NE17) showed the highest encapsulation compared to the others, which may be attributed to the entanglement of the PEG 4000 helping to encapsulate drug molecules within the NE droplets. The difference in oil content (3.33% in FA-NE9 compared to 7.14% in FA-NE17), however, did not influence the drug content percentage of the two formulations. Table 4 summarises the characteristics of the NE produced by each formulation. Droplet size, polydispersity index (PDI), and zeta potential (ζ-potential) are important characteristics of NE. The droplet size may be influenced by several factors, including the composition of the materials and the preparation method, among others. As shown in Table 4, FA-NE1, FA-NE3, FA-NE5, FA-NE6, FA-NE9 and FA-NE10 gave oil droplet sizes below 20 nm, which may be due to the low oil composition. FA-NE13 and FA-NE17, with the highest composition of oil, showed significantly larger droplet sizes than the others. This is attributed to the oil concentration, as similar sizes were obtained for both the NE with PG (FA-NE13) and with PEG (FA-NE17) as part of the Smix. ζ-potential is a measure of the magnitude of electrostatic or charge repulsion or attraction between particles and may serve as a partial indicator of the physical stability of the NE. The ζ-potential values differ between formulations, ranging from -16.1 to -24.7 mV (Table 4). The ζ-potential was also found to be highly influenced by the type of surfactants used in the formulation. The presence of PG or PEG 4000 reduces the surface charges. FA-NE1, which contains only Tween 80, showed a higher surface charge (-24.7 mV) compared to the formulations with PEG 4000 and PG. This is due to the shielding of the droplets by the surfactants, causing a reduction in the absolute charge value (Devalapally et al. 2013). A high surface charge helps to ensure a stable NE, as it creates a high-energy barrier against coalescence of the dispersed droplets (Kale & Deore 2016). In general, droplets with a surface charge of >+25 mV or <-25 mV will have high stability, while those with lower absolute values will be more susceptible to coalescence, which may lead to emulsion cracking or creaming (Shnoudeh et al. 2019). The surface charges of the FA-NE formulations prepared in this study gave an initial indication that the risk of coalescence may be higher; this was subsequently evaluated in the stability study. The optimised nanoemulsions showed PDI values lower than 0.5; PDI values below 0.5 were targeted to ensure that a monodisperse system would be obtained. Products with a highly polydisperse internal phase present major hurdles to drug diffusion. In addition, polydispersity will also cause inconsistencies in drug absorption and hence difficulties in predicting treatment response. The opacity of NE depends on the droplet size; NE are usually transparent or slightly turbid. This optical property is usually determined by measuring light transmission. The FA-loaded NE obtained in this study showed light transmission between 81.37±0.47% and 96.47±0.76%, which means that the preparations are clear and transparent (Rokad et al. 2014). The clarity of NE can be estimated by measuring the refractive index of the formulations. In the present study, the refractive index of distilled water was used as a comparison to the prepared formulations.
As shown in Table 4, the refractive index of the NE was between 1.3 and 1.4 at 25±0.5 °C. Hence, the formulations appeared nearly transparent in the visible spectrum and exhibited a minimal light scattering effect (Rokad et al. 2014). Formulations for topical application should match the pH of the skin, which in general ranges from 4 to 6. Products with alkaline pH tend to cause skin irritation and subsequently bacterial infections upon continuous usage (Teo et al. 2015). Lucero et al. (1994) suggested that topical products should have a pH of between 4 and 6.5 to avoid any skin irritation. The pH values of all formulations were within this suggested limit, except for FA-NE1 (pH 7.47), which was prepared with Tween 80 as a single surfactant. This means that the quantity of Tween 80 should be well moderated to ensure that the pH of the product is acceptable for topical application. Elfiyani et al. (2017) suggested that the presence of ethanol as a co-surfactant to Tween 80 may reduce the pH of a microemulsion due to the oxidation of the alcohol into carboxylic acid (Maddela et al. 2017). However, this could not be corroborated in the current study: the pH values of FA-NE3 and FA-NE5 were not significantly different (p > 0.05), although ethanol was present in FA-NE5. TEM analysis showed that the lipid emulsion droplets were almost spherical and homogeneous (Figure 4). This finding confirms that the droplets dispersed in the NE system are in the nanometre range, between 30 and 100 nm. The antibacterial test was conducted against MSSA and MRSA and the results were compared to the commercially available fusidic acid cream. The effectiveness of all formulations in inhibiting bacterial growth was determined by measuring the inhibition zone, as presented in Table 5. When compared to the marketed medication, the fusidic acid nanoemulsion demonstrated significantly improved antibacterial activity (p < 0.05), with a larger zone of inhibition. Several factors may have contributed to these results. First, due to the nano-size of the droplets, NE has a larger surface area and can penetrate deeper, hence improving the activity (Zhang et al. 2010). For instance, Marslin et al. (2015) tested the effect of a Withania somnifera cream mixed with silver nanoparticles and reported an increase in penetration, which resulted in greater suppression of bacterial growth. The inclusion of tea tree oil, which has been widely reported to possess antimicrobial activity against Gram-negative and Gram-positive bacteria and fungi, is also responsible for the improved bacterial inhibition by the formulated NE. Tea tree oil has been used in numerous skincare products for its antibacterial effects, which are principally due to the presence of terpinen-4-ol and α-caryophyllene (Farag et al. 2004).
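The MTT readout described in the methods reduces to a simple normalisation against the untreated control. Below is a minimal sketch of that calculation; the absorbance values are hypothetical placeholders, not the measured HS-27 data.

    # Minimal sketch of the MTT cell-viability calculation: absorbance at 570 nm
    # is normalised to the untreated control. Values are hypothetical placeholders.
    import statistics

    control_abs = [0.82, 0.85, 0.80]                 # untreated HS-27 wells
    treated_abs = {                                  # NE concentration (%) -> wells
        0.025: [0.80, 0.79, 0.81],
        0.05:  [0.74, 0.76, 0.73],
        0.1:   [0.66, 0.64, 0.67],
        0.2:   [0.52, 0.55, 0.50],
    }

    control_mean = statistics.mean(control_abs)
    for conc, wells in treated_abs.items():
        viability = statistics.mean(wells) / control_mean * 100
        print(f"{conc}% NE: viability = {viability:.0f}%")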
Case-control study on uveal melanoma (RIFA): rationale and design

Background

Although a rare disease, uveal melanoma is the most common primary intraocular malignancy in adults, with an incidence rate of up to 1.0 per 100,000 persons per year in Europe. Only a few consistent risk factors have been identified for this disease. We present the study design of an ongoing incident case-control study on uveal melanoma (acronym: RIFA study) that focuses on radiofrequency radiation as transmitted by radio sets and wireless telephones, occupational risk factors, phenotypical characteristics, and UV radiation.

Methods/Design

We conduct a case-control study to identify the role of different exposures in the development of uveal melanoma. The cases of uveal melanoma were identified at the Division of Ophthalmology, University of Essen, a referral centre for tumours of the eye. We recruit three control groups: population controls, controls sampled from those ophthalmologists who referred cases to the Division of Ophthalmology, University of Duisburg-Essen, and sibling controls. For each case the controls are matched on sex and age (five-year groups), except for sibling controls. The data are collected from the study participants by a short self-administered questionnaire and by telephone interview. During and at the end of the field phase, the data are quality-checked. To estimate the effect of exposures on uveal melanoma risk, we will use conditional logistic regression that accounts for the matching factors and allows us to control for potential confounding.

Background

Although a rare disease, uveal melanoma of the eye is the most common primary intraocular malignancy in adults, with an incidence rate of up to 1.0 per 100,000 person-years (age-standardized, world standard) in Europe [1]. Only a few consistent risk factors have been identified for this disease. One set of uncommon risk factors includes predisposing diseases like the dysplastic nevus syndrome, atypical ocular nevi, and ocular and oculodermal melanocytosis [2,3]. Another set of host risk factors are ancestry, light skin and iris pigmentation [4][5][6]. In addition, a number of environmental factors including UV radiation [7,8] are weakly or inconsistently associated with uveal melanoma. Some uveal melanomas are associated with neurofibromatosis. However, the vast majority of familial cases reported are non-syndromic [9]. Some recent studies suggest that mutations in the breast cancer susceptibility locus, BRCA2 on chromosome 13, may be involved in the development of uveal melanoma [9,10]. Occupation may also be relevant, and may include chemical work [11,12], arc welding [8,12] and agriculture and farming work [13,14]. Two recent studies found an increased risk of uveal melanoma among cooks [8,15]. Electromagnetic waves with frequencies of 300 kilohertz (kHz) to 300 gigahertz (GHz) are called radio-frequency radiation. Typical occupational sources transmitting radio-frequency radiation in Germany include walkie-talkies in the military, security services and plants, radio sets on ships, transporters, freight trains and police cars, and wireless phones, including cellular phones (C-net: 450-465 MHz; since the 1990s, D-net: 890-960 MHz and E-net: 1710-1800 MHz) and cordless phones (800-1900 MHz) with different modulation types.
The population-wide introduction of analog and digital mobile phone techniques in recent years, which has been coined the mobile revolution [16], has resulted in an increasing number of people who fear that radio-frequency radiation (RFR) may have adverse health effects [17]. There is currently much uncertainty about the role, if any, of radio frequency transmitted by radio sets or mobile phones in human carcinogenesis. The assessment of the potential association between radio-frequency radiation and cancer risk is hampered by uncertainties about effective electromagnetic frequency ranges, the lack of a clear biological mechanism, and difficulties of exposure assessment. Until now, the majority of epidemiological cancer studies have focussed on brain cancer, because the brain may be exposed to RFR [18][19][20]. With the exception of one study by Hardell et al. [21], all brain cancer studies to date have shown no association between RFR as emitted by mobile phones and brain tumour risk. In contrast, the pooled analysis of two recent German case-control studies on the aetiology of uveal melanoma showed that frequent use of radiofrequency radiation devices, including radio sets and mobile phones, at the workplace is associated with an approximately 4.2-fold elevated risk of uveal melanoma [22]. However, several methodological limitations, including a small study size and a crude exposure assessment, complicated the interpretation of these findings. Here we present the study design of an ongoing incident case-control study on uveal melanoma (acronym: RIFA study) that focuses on radiofrequency radiation as transmitted by radio sets and wireless telephones. We expect to publish the results of the study in summer 2005.

Study questions

The RIFA study is planned to answer several etiologic questions with a special focus on electromagnetic radiation, especially radio-frequency radiation as emitted by mobile phones and radio sets. First, is the finding of an increased risk of uveal melanoma among subjects with frequent use of RFR devices reproducible? Second, if there is an association, is this association site-specific in terms of the laterality of the uveal melanoma and the major side of mobile phone use? Third, if there is an association, is there a dose-response relationship between RFR and uveal melanoma risk? Another set of etiologic questions relates to pigmentation characteristics, including iris colour, hair colour, tendency of the skin to burn and to tan, freckling, and number of cutaneous nevi. A further study question relates to exposure to work- and leisure-time-related ambient ultraviolet radiation and uveal melanoma risk. In addition, our study focuses on several occupational exposures or jobs that are suspected to be associated with an increased risk of uveal melanoma [23,15], including working in the chemical industry, farming, coal mining, welding, cooking, working in the health service sector, etc. Finally, we focus on the association between the cancer history of the index persons and their relatives (especially breast cancer) and the risk of uveal melanoma.

Case recruitment

The case recruitment is hospital-based and takes place at the Division of Ophthalmology, University of Essen, which is the referral centre for eye cancer in Germany, currently treating about 400-500 eye cancer patients per year. Eligible uveal melanoma cases have to fulfil several criteria.
Patients with a newly diagnosed first uveal melanoma located in the choroid, iris, and/or ciliary body [24] during the recruitment period from September 25th, 2002 to September 24th, 2004 are eligible if they are referred to the Division of Ophthalmology, University of Duisburg-Essen, during the recruitment period, are aged 20-74 years at diagnosis, are living in Germany, and are capable of completing the interview in German. The majority (about 70-80%) of uveal melanomas treated at the University Hospital Essen receive episcleral plaque therapy without histological verification. For this reason we did not include a reference pathologist to review the diagnostic certainty of the cases. Experience from our previous case-control studies showed that there is nearly perfect agreement between the local eye doctors in the reference centre and the international reference pathologist (Dr. Ian Cree, London) [22]. We considered a diagnosis of uveal melanoma as definite if the results of the clinical examination of the eye (ophthalmoscopy) and ultrasound (sometimes supplemented by fluorescence angiography, computed tomography or magnetic resonance imaging) were unambiguous. The inclusion and exclusion criteria of the cases are listed in Table 1.

Control recruitment

Interim analyses showed that the majority of cases come from the territory of former West Germany. Figure 1 displays the geographic distribution of cases treated for uveal melanoma at the Division of Ophthalmology, University of Duisburg-Essen, from September 24th, 2002 through March 31st, 2004. Assuming comparable incidences of uveal melanoma in the federal states of Germany, the crude rate (referred cases divided by population at risk aged 20-74 years) may be considered an indicator of the referral effect. Obviously, the referral effect varies by federal state. However, it is difficult to judge whether the case referral to the Division of Ophthalmology, University of Duisburg-Essen, is a random sample of all newly diagnosed uveal melanoma cases in Germany. We therefore decided to recruit three different control groups. First, if we assume that cases treated in Essen are a random sample of all cases in Germany, a population-based control group would be the most appropriate control group. For this approach, we randomly select controls from mandatory lists of residents that cover the total population of the city or local district. These lists are regarded as the most complete sampling frame for population-based studies in Germany. Second, if the referred cases are not a random sample of all newly diagnosed cases of uveal melanoma in Germany, a control group sampled from those ophthalmologists who referred cases to the Division of Ophthalmology, University of Duisburg-Essen, would be most appropriate [25,26]. To increase the statistical power, we decided to include two controls per case. In addition, we recruit sibling controls of cases (matching ratio 1:1) in order to assess whether genetic factors may confound the effect of exposure; the sibling controls are matched in genetic background. The inclusion criteria of the three control groups are presented in Table 2.

Power calculations

Based on our former uveal melanoma case-control studies [15,22] we estimated that we would identify 480 eligible cases within a recruitment period of 24 months. With an anticipated response proportion of about 80%, we expect to interview 380 cases overall. Population-based prevalence estimates of mobile phone use in the general population are scant.
A recent telephone survey from 2001 showed that 82% of male and 74% of female participants aged 14-44 years use mobile phones; within the age group 45-59 years, 63% of male and 58% of female participants use mobile phones. The oldest age group (≥60 years) shows considerably lower prevalences of mobile phone use (men: 49%, women: 25%) [27]. To determine the statistical power of the case-control study, we assumed several mobile phone prevalence estimates in the control group. We conducted all power calculations two-sided according to the formulas of Woodward [28]. We chose α to be 5% and 1-β to be 90%. Detectable increases of odds ratio estimates for varying prevalences of mobile phone use in the control group are presented in Table 3. A case-control interview ratio of 380 to 760 would enable us to detect increased odds ratios in the range of 1.5 to 2.2, depending on the exposure prevalence in the control group (Table 3). Table 4 presents a list of exposures that are assessed in the RIFA study.

Exposure assessment

The questionnaire on mobile phone use is the same instrument which has been used by the international case-control study on brain cancer and mobile phone use sponsored by the International Agency for Research on Cancer, called the Interphone study [29]. In contrast to the Interphone study, we do not perform personal interviews but telephone interviews. For this reason, we cannot show visual materials, such as pictures of phone models, during the interview. Compared to other studies on the aetiology of uveal melanoma, the RIFA study uses a detailed assessment of pigmentary characteristics. The self-administered questionnaire contains an eye and hair colour card that allows the participants to choose the most appropriate colour of their eyes and hair at age 20 years. During the telephone interview, the skin reaction to sun exposure (tanning ability, burning tendency) is assessed according to the concept of Fitzpatrick [30]. Fitzpatrick's original question contains both items (tanning ability, burning tendency) within a single question, which is of methodological concern because the categorical answers given by the Fitzpatrick question may not represent all types of skin reactions, as has been demonstrated by Rampen et al. [31]. To reduce this potential misclassification, we separated the Fitzpatrick items into two questions that separately ask about burning tendency and tanning ability. For the assessment of nevi of the upper arms and the dorsum of the feet with a diameter of at least 3 mm, participants receive a template with a 3 mm hole that enables them to count all nevi of this minimum size. In addition, the CATI (computer-assisted telephone interview)
includes a detailed history of sunburns in the 15 years before the interview and the tendency to freckle as a child. The CATI contains a question on eye colour with the typical categorical answers (blue, grey, green, hazel, brown, black) as used by several others [22], which will enable us to study the agreement between eye colour assessment by colour cards and by colour categories. The course of the exposure assessment is displayed in figure 2. The exposure assessment starts with a self-administered questionnaire among subjects who agreed to participate and gave written informed consent. Subjects are asked chronologically about each job held for at least six months, including questions on the job tasks and industries, as has been done in previous studies on the aetiology of uveal melanoma [22]. In addition, subjects are asked questions related to eye colour, hair colour, ever use of mobile phones, wireless telephones and radio sets, and the number of nevi. Subjects who have sent back the self-administered questionnaire undergo a computer-assisted telephone interview, which takes about 30-40 minutes. For subjects who reported selected work tasks (e.g. cooking and food processing, welding and others), we use 16 job-specific supplementary questionnaires to obtain details of the job tasks and materials used.

Quality assurance
The quality control program includes several procedures. The study is designed to fulfil the recommendations of the German Good Epidemiologic Practices [32]. Eight months before the main study started, a manual of standard operating procedures (MOP) was written and a pilot study of four weeks was conducted to test the field work and the exposure questionnaires. After some minor revisions of the MOP, report forms, and questionnaire instruments, the principal investigator and the participating epidemiologists had to sign that they fully agree with the final version of the MOP. The interviewers of the study were introduced to the field work and were blinded against our study hypotheses. After an initial interviewer training course, interviewers are regularly monitored and receive regular training courses. The recruitment progress, given as the number of registered cases and controls, the distribution of inclusion and exclusion criteria, and response proportions, is monitored monthly. The analysis of nonresponse reasons is supplemented by a short questionnaire for subjects not willing to participate. This questionnaire includes a few demographic and exposure items that help us to assess potential selection effects due to nonresponse. A plausibility control of the interview data is done quarterly and is the basis for the regular training courses of the interviewers. The completeness of case registration is checked by regular comparison of the list of registered cases with the lists of admissions to the referral centre. In addition, we compare our list of cases with the data of the hospital information system, which includes information on diagnoses. The self-administered short questionnaires are visually edited by the study personnel before the telephone interview starts. The visual editing includes a completeness check and the coding of the life-long job history. For each job period, the occupation and branch of industry are coded according to ISCO-68 [33] and NACE 1993 [34]. These classifications have been repeatedly used in occupational case-control studies.
Self-administered questionnaires with incomplete information or missing data are marked, and the affected questions are prepared for the telephone interviewer, who is responsible for asking them before the main telephone interview starts. The CATI contains internal quality checks that prevent data entry errors. For example, interviewers are not able to fill in the detailed questions on mobile phones if the entry question on ever having used a mobile phone has been answered with no.

Planned analyses
At the end of the field phase, the data are quality-checked. To estimate the effect of exposures on uveal melanoma risk, we will use conditional logistic regression, which accounts for the matching factors and allows us to control for potential confounding. We will classify people as exposed to an occupational category if they ever worked within this category for at least six months. The quantification of mobile phone use will be based on the average number of phone calls and the average duration of phone calls per time unit. The association between pigmentary characteristics and uveal melanoma risk will be assessed by a detailed matrix containing information on hair colour at age 20 years, eye colour, freckling tendency, and skin colour. Final results of these analyses are scheduled to be published in summer 2005.
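The detectable odds ratios reported in the power calculation section can be checked approximately with a short script. The sketch below is not Woodward's exact formulation [28]; it uses the common two-proportion normal approximation for an unmatched design with 380 cases and 760 controls, and all function names are ours.

```python
from math import sqrt
from scipy.stats import norm

def power_for_or(odds_ratio, p0, n_cases=380, n_controls=760, alpha=0.05):
    """Approximate power to detect a given odds ratio in an unmatched
    case-control study; p0 is the exposure prevalence among controls."""
    # Exposure prevalence among cases implied by the odds ratio.
    p1 = odds_ratio * p0 / (1.0 + p0 * (odds_ratio - 1.0))
    pbar = (n_cases * p1 + n_controls * p0) / (n_cases + n_controls)
    se0 = sqrt(pbar * (1 - pbar) * (1.0 / n_cases + 1.0 / n_controls))
    se1 = sqrt(p1 * (1 - p1) / n_cases + p0 * (1 - p0) / n_controls)
    z_alpha = norm.ppf(1.0 - alpha / 2.0)  # two-sided test
    return norm.cdf((abs(p1 - p0) - z_alpha * se0) / se1)

def detectable_or(p0, target_power=0.90):
    """Smallest odds ratio detectable with the target power (bisection)."""
    lo, hi = 1.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if power_for_or(mid, p0) >= target_power:
            hi = mid
        else:
            lo = mid
    return hi

for p0 in (0.1, 0.3, 0.5, 0.7):
    print(f"control prevalence {p0:.0%}: detectable OR ~ {detectable_or(p0):.2f}")
```

Values obtained this way should bracket the 1.5 to 2.2 range quoted from table 3, with small deviations because the exact formulas differ.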
2014-10-01T00:00:00.000Z
2004-08-19T00:00:00.000
{ "year": 2004, "sha1": "c7b5561de9d454e34fd2fa724c2e34d8929584ea", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/1471-2415-4-11", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "32d1f1f6512ca76b2867c6a28ebe8393532e9f36", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13869558
pes2o/s2orc
v3-fos-license
Changes in the Attitudes of Y Generation Members towards Participation in the Activities of Municipalities in the Years 2008-2017

The presented research results cover a comparative analysis of cyclical surveys on attitudes to social participation conducted among students (members of generation Y) of Wroclaw's economics universities in 2008, 2013 and 2017. The basic purpose of the study is to compare the attitudes to social participation of students at similar stages of life and at a similar age over the years 2008-2017. The second aim is to answer the question whether these attitudes have changed with the introduction of active participation of citizens through the use of the participatory budget. The survey included 40 problematic questions; in the majority of them a five-step Likert scale was applied. The results of the survey show that during the analysed period the surveyed groups of young Poles had a very low, though slightly rising, interest in the financial activity of the gmina (municipality). The feeling of real influence on local issues among the respondents is slowly rising, which may result, among other things, from the fact that participatory budget processes in municipalities have become widespread. Additionally, the percentage of people who do not feel the need to increase their impact on municipal affairs continues to rise, which may be caused by the existence of more satisfactory possibilities for citizens to participate in the activities of municipalities.

Introduction
The notion of citizens' participation in the activities of the state is an ambiguous one. One should refer here to one of the basic ways of defining the essence of social participation, proposed by Sherry R. Arnstein in 1969 [1]. According to this concept, participation refers to the influence of the "minority" (i.e., part of the local community) on management decisions. The typology of the kinds of participation was hierarchically structured along the increasing strength of stakeholders' decision-making capacity. The lowest levels of the "ladder", manipulation and therapy, do not constitute real participation, since they aim at shaping the stakeholders' attitudes by managers (they are merely an "illusion of participation": they can take the form of consultative groups or discussion panels moderated by managers and propagating their ideas). The further levels of participation, information, consultation and mitigation, are a substitute for proper participation: obtaining information on the tasks being carried out is not followed by the possibility for people to influence their form (informing); or, despite listening to stakeholders, collecting surveys and carrying out other studies, the authorities take no actions aimed at realizing the collected suggestions (consultations); or the representatives of stakeholders participating in the planning and implementation of the tasks have no possibility to influence the actual activities of the authorities. These three levels of participation represent only a "safety valve", creating an illusion that the government deals with the issues reported by residents.
Actual participation, by contrast, will take, first of all, the form of partnership, where in a process of negotiation and co-responsibility stakeholders are able to influence the decisions of the authorities; secondly, the form of delegated power, where stakeholders primarily decide on the shape of a particular project; and thirdly, in its extreme form, participation will mean taking control of a part of the management activities in a relevant and important area. The goals of social participation are, for example: firstly, to inform and educate the society; secondly, to involve in the decision-making process of the managing bodies the values, suggestions and preferences of the society; thirdly, to increase the substantive quality of the decisions; fourthly, to increase confidence in the authorities; fifthly, to defuse conflicts between the negotiating parties (stakeholder groups, governing bodies and stakeholders); and sixthly, to improve the cost-effectiveness of decision making [5]. It should be noted that the sixth goal is a measure of the legitimacy of particular forms of social participation. Setting the costs of the individual types of participation against the achievable or expected effects often makes it possible to choose a more profitable form of participation.

2 The purpose and research method
The presented research results cover a comparative analysis of cyclical surveys on attitudes to social participation conducted among students of Wroclaw's economics universities in 2008, 2013 and 2017 (not yet published). The findings of these examinations were published by the authors in separate papers [3,4]; however, no comparative analysis has been performed. Thus, the basic purpose of the study is to conduct a comparative analysis of the attitudes to social participation of members of this generation at similar stages of life (students) and at a similar age over the years 2008-2017. The second aim is to answer the question whether these attitudes have changed with the introduction of active participation of citizens through the use of the participatory budget. It should be noted that, as a result of the deliberate selection, the conclusions of the comparative analysis may only be considered an introduction to the discussion on the subject matter raised in the study. Surveys were carried out among students of the two biggest economics universities in Wrocław (288 surveys were completed in 2008, 142 in 2013, and 208 in 2017). The survey included 40 problematic questions (in 2017 it had 20 additional questions). In the majority of them a five-step Likert scale was applied. 30 questions referred to issues connected with the gmina inhabited by these students; the next 8 referred to an ideal gmina. The questions can be divided into several categories: satisfaction with living in the gmina, knowledge of the problems the gmina faces, participation in local affairs and activities, the possibility and will to influence local finances, investments and services offered by the gmina, and the information policy concerning investments, finances and the budget.

Forms and scale of citizens' participation
Nowadays, in most developed countries there are various forms of citizens' participation in the activities of the authorities. The solutions which function in Poland, however, can be considered very limited.
The basic forms regulated by law are information (through Public Information Bulletins, websites and access to public information) and the consulting processes resulting from, e.g., the Act on Municipalities. Since 2012, we can observe more possibilities for citizens to participate in shaping the activities of municipalities as a result of the introduction of participatory budgets by local governments. In Poland, these procedures are usually connected with the "good will" of the authorities of individual local government units, mainly due to the continuous lack of systemic solutions included in normative acts. However, considering the placement of this form of participation in the participation ladder discussed above, it should be noted that the features of the participatory budget presented in the literature, especially the real (though usually qualitatively poor) impact of stakeholders on the direction and form of public expenditure, place this form of participation at least at the level of partnership. The analysis of the projects implemented by local government units, referred to in the documents as a civic or participatory budget [2], indicates that they most often relate to the functionality or improvement of the quality of life of members of the local communities through the realization of various investment tasks. In Poland, the participatory budget is a subject of research conducted by various authors. However, this subject has not yet been analyzed profoundly. From the point of view of the analyzed issue, research on the participatory budget in Poland has been carried out in the area of legal aspects and public consultations [9,14] and of the meaning and procedures of social participation [7,12]; however, it is mainly of a pilot or review nature. The desire to cooperate, to associate, to work for the common good and for others: these are important features of civil society which serve both to build positive social relationships and to raise the level of social capital, whose role in the socio-economic development of the country is more and more often emphasized in the literature [11]. A profile of the state of civil society in Poland is one of the elements of the cyclically published report "Diagnoza społeczna. Warunki i jakość życia Polaków" (eng. Social Diagnosis. Conditions and Quality of Life of Poles). The 2015 report [6] characterizes, among others, the attitudes of Poles towards the common good. One of the examined issues was civil experience and competence. The places of acquisition of civic experiences and skills are voluntary organizations, activities and contacts which fill the space between the individual and the society and between the citizen and the state [6]. The simplest measure of the state of civil society is the degree of association, i.e., the percentage of citizens who belong to voluntary organizations. In Poland in 2015, only 13.4% of respondents belonged to any organizations, associations, parties, committees, councils, religious groups, unions or circles. The analysis of the percentage of members in different socio-demographic groups in 2011 and 2015 again shows the lowest share among the youngest age groups (in 2011: up to 24 years old, 13.5%; 25 to 34 years old, 10.3%; in 2015: up to 24 years old, 10.7%; 25 to 34 years old, 11.8%). All other age groups showed more activity in the studied area. Acts on behalf of one's own community, often undertaken individually, without a formal association, remain a separate matter.
Social Diagnosis indicates that this activity is just as unpopular as being affiliated to an organization. In the surveyed period, only 15.4% of the respondents were involved in activities for the benefit of the local community: the municipality, the housing estate or the locality in the nearest neighbourhood. The index of social experiences and civic actions adopted in the "Social Diagnosis" (an aggregate measure of social and civic experiences related to 1) voting in local elections, 2) activity for the good of society, 3) participation in meetings, 4) work for other people or social organizations, and 5) performing functions in an organization) indicates a very limited range of these experiences. Poles are relatively rarely involved in activities in organizations, participation in grassroots social initiatives, public meetings or volunteering. The presented results indicate that the state of civil society and social attitudes in Poland is unsatisfactory. The youngest generation (generation Y) is characterized by the lowest level of sensitivity to violations of public goods. Due to the prospective importance of the millennial generation for shaping future social life and its potential of having skills and attitudes different from the previous generations, it seems justified to undertake research on the changes in the attitudes of young Poles towards the idea of social participation.

Characteristics of Y generation
The ability to participate actively in shaping the activities of the municipality refers to all its inhabitants but, as the "Social Diagnosis" states, not everyone expresses the same desire and willingness to undertake such activities. Particular attention should be paid to the dedication to social participation of the young generation, which on the one hand possesses significant potential in the form of modern knowledge, entrepreneurial attitudes, and openness to innovative solutions, and on the other hand is commonly associated with pursuing self-interest rather than the social one. Such opinions derive from the characteristics describing the behaviour and traits of the young generation. The literature of the subject indicates particular differences between the generations present in Poland and in the world, which translate into different professional, economic and social behaviours of these groups. Four generations are most often indicated: Veterans, Baby Boomers, generation X and generation Y. The distinguishing experiences of generation Y include:
• growing up in free market conditions,
• contact with new technology, whose intensive development has accompanied the development of generation Y,
• an increasing standard of living and consumption,
• a greater choice of education and career paths,
• greater mobility and openness: easier travelling and contact with other cultures (also through the Internet and language skills),
• an excellent knowledge of new technologies: quick acquisition of needed information and the creation of virtual communities, but often difficulties in direct interpersonal contacts,
• a fast pace of life: change as a normal state, the ability to communicate and move quickly, doing several things at once, but also impatience and the desire to have everything immediately,
• a change of approach to one's own life: greater individualism, self-reliance, high self-esteem and the desire for self-realization [11].
The characteristics of generation Y, taking into account the demographic, historical, technological and social dimensions, are presented in Table 1.
Table 1. Characteristics of generation Y.
Demographic dimension: In Poland, generation Y comprises over 11 million people, which constitutes one quarter of the population. The situation is similar in other key regions of the world. A massive generation which is significant for shaping future social realities.
Historical dimension: The generation growing up after the systemic transformation, having no experience of the communist system and the command-and-control economy, which hinders mutual understanding with older generations. The first generation creating its consciousness in a Europe without territorial divisions and convinced that living in another country may be easier, but, on the other hand, proud of its origin and faithful to local traditions.
Technological dimension: The generation brought up in the age of the technological and information revolution. An excellent knowledge of modern technologies and their intensive use in private and professional life. Virtual reality is a complement to the real world, and full participation in the social world requires parallel presence in both worlds. Rich Internet resources and their ease of use are conducive to creating the millennials' illusion of continuous access to knowledge and competence and, as a result, of rapid self-resolution of problems.
Social dimension: The priorities of the Polish generation Y are: having a wide circle of friends and acquaintances, health, fame and material success. Compared to older generations, millennials have more friends (on average more than 40) and a larger network of acquaintances (on average more than 200) with which they maintain constant contact. Polish millennials are more sensitive to economic stimuli than their western counterparts living in conditions of economic prosperity and a stable economic situation. Generation Y is convinced that life success is a consequence of diligence and the acquisition of the necessary competences, which means continuous improvement and participation in various types of courses.

It should be noted that in the case of generation Y there is a large age span between its members. Therefore, it appears to be a good basis for undertaking research into the evolution of the attitudes of members of this generation at similar stages of life (studies) and at similar ages over the years. Taking into consideration the characteristics of generation Y, its age range and the role it will play in shaping future socio-political realities, the issue of how willing millennials are to contribute to shaping the activities of the communities in which they live seems to be important. The next part of the article discusses this topic.

Results and discussion
The primary indicator reflecting the level of awareness of being a member of a local community and feeling connected with it is the participation of the respondents in local elections. The comparison of the 2008, 2013 and 2017 survey results shows a gradual decrease in the proportion of people participating in these elections in the following years (question 5, fig. 1). This signals a potential decrease in citizens' interest in participation in municipal activities. At the same time, this constitutes a reflection of a general tendency in Polish society. It means a decreasing interest in public affairs, which may result from many factors, both related to the characteristics of generation Y and stemming from external factors.
While comparing the subsequent groups of respondents, it should be noted that a low feeling of real influence on local issues among the respondents was observable (from 13 to 18% of them stated that they have an impact - fig. 2), but in 2017 it was higher than in the previous years. This may result, among other things, from the fact that participatory budget processes in municipalities became widespread. On the other hand, the research shows (fig. 3) that the percentage of people who do not feel the need to increase their impact on municipal affairs continues to rise (from 10.7% in 2008 to 29% in 2017). This may partially stem from the fact that the current possibilities for citizens to participate in the activities of municipalities are becoming satisfactory, and partially from the fact that the respondents' interest in local affairs is diminishing. The survey results clearly show a negative trend: the interest in the expenditure directions and investment activities of municipalities is decreasing; between 2008 and 2017 it declined by 9 to 10 percentage points (questions 9 and 10 - figs. 4 and 5). As can be observed, less than one third of the respondents were interested in how municipalities distribute public money. On the other hand, confidence in the effectiveness of the consultative processes introduced by local governments grew slowly, from about 11% in 2008 to more than 16% in 2017 (fig. 6). Nevertheless, still more than half of the respondents pointed out the lack of effective communication procedures with officials in this regard. The last issue worth mentioning is the level of social activity of the respondents. As can be seen, the majority of the respondents basically do not participate in the processes of public consultation conducted by municipalities. Naturally, there was a more than twofold increase in the percentage of respondents declaring participation in consultations (from 6.7 to 13.9% of respondents), but this is still a very poor result. The level of participation is in this case similar to the level of participation of respondents in the activities of municipal social organizations: approximately a twofold increase as well (from 5.4 to 11.1%).

Conclusion
The results of the survey show that during the analysed period the surveyed groups of young Poles had very little, though slightly rising, interest in the financial activity of the gmina, despite a gradual decrease in the proportion of people participating in municipal elections in 2008, 2013 and 2017. The feeling of real influence on local issues among the respondents is slowly rising, which may result, among other things, from the fact that participatory budget processes in municipalities became widespread. Additionally, the percentage of people who do not feel the need to increase their impact on municipal affairs continues to rise, which may be caused by the existence of more satisfactory possibilities for citizens to participate in the activities of municipalities. Another matter is the general attitude of generation Y to citizen (social) participation: millennials are not willing to contribute to shaping the activities of their communities, and they are generally not interested in how municipalities distribute public money.
2018-05-06T02:37:42.380Z
2018-01-12T00:00:00.000
{ "year": 2018, "sha1": "5c233c155be059dd1697f069b759c39af2a51b92", "oa_license": "CCBY", "oa_url": "http://digilib.uhk.cz/bitstream/20.500.12603/370/1/Bednarska_HED18_paper_17.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f1e29a7cdea126752a6c35f2d6efd3856113c6c0", "s2fieldsofstudy": [ "Economics", "Sociology" ], "extfieldsofstudy": [ "Political Science" ] }
42176430
pes2o/s2orc
v3-fos-license
An Extension of Godunov SPH II: Application to Elastic Dynamics

Godunov Smoothed Particle Hydrodynamics (Godunov SPH) is a computational fluid dynamics method that utilizes a Riemann solver and achieves second-order accuracy in space. In this paper, we extend the Godunov SPH method to elastic dynamics by incorporating the deviatoric stress tensor, which represents the stress due to shear deformation or anisotropic compression. Analogously to the formulation of the original Godunov SPH method, we formulate the equation of motion, the equation of energy, and the time evolution equation of the deviatoric stress tensor so that the resulting discretized system achieves second-order accuracy in space. The standard SPH method tends to suffer from the tensile instability, which results in unphysical clustering of particles, especially in tension-dominated regions. We find that the tensile instability can be suppressed by selecting an appropriate interpolation for the density distribution in the equation of motion of the Godunov SPH method, even in the case of elastic dynamics. Several test calculations for elastic dynamics are performed, and the accuracy and versatility of the present method are shown.

Introduction
Smoothed Particle Hydrodynamics (SPH) is one of the computational fluid dynamics methods using particles that mimic fluid elements (e.g. [1], [2], [3]). Recently the standard SPH method, i.e., the most popular form of the SPH method, has been extended to elastic dynamics and applied to calculations of planetesimal collisions (e.g. [4], [5]). The SPH method does not require a Eulerian mesh. Thus it is favourable for simulations with large deformation, and we can easily track information accompanying the particles, such as crack history. Therefore, the SPH method is suited for calculations of disruptive collisions. However, the standard SPH method for elastic dynamics has a serious problem that results in unphysical clustering of particles, especially in tension-dominated regions. This problem is called the tensile instability [6]. The property of the tensile instability for the case of the Nyquist wavelength is analyzed in [7] for hydrodynamics, and in [8] for magnetohydrodynamics. The tensile instability also occurs in the positive pressure region that represents compressed material or usual fluid. According to [9], B-spline kernels produce the tensile instability even in the positive pressure regime if the number of neighbour particles is too large. The simple test calculation of an oscillating plate in [10] demonstrates that the standard SPH method suffers from unphysical fracture caused by the tensile instability. Thus mitigation of the tensile instability is required when we use the SPH method for elastic dynamics. For example, in [13] and [10], Monaghan and Gray et al. introduce an artificial stress term that provides a strong repulsive force only when particles become too close to each other, and thereby try to prevent the tensile instability. They conducted a linear stability analysis and found that this method suppresses the instability at short wavelengths and does not strongly affect perturbations of long wavelengths. However, this method includes an artificial stress term that does not exist in the original equations. Moreover, according to [14], this method does not seem to suppress the tensile instability in simulations of hypervelocity impacts.
Sugiura and Inutsuka [15] mitigate the tensile instability using the Godunov SPH method [16], which utilizes a Riemann solver and achieves second-order accuracy in space. They conduct a linear stability analysis for the equations of the Godunov SPH method, and find that the tensile instability can be suppressed by selecting an appropriate interpolation for $V_{ij}^2$ (i.e., a weighted average of $\rho^{-2}$) depending on the sign of the pressure. However, they conduct the linear stability analysis only for the equations of hydrodynamics, and it is not obvious that their approach works for those of elastic dynamics, which use the deviatoric stress tensor.
The accuracy of the standard SPH method is below first order in the case of a disordered particle distribution. This means very slow convergence with increasing spatial resolution. For example, Genda et al. [17] conducted simulations of planetesimal collisions using the standard SPH method, and evaluated the critical kinetic energy $Q^*_D$, which is required to disrupt planetesimals, while increasing the number of particles. As a result, they found that at least five million particles are required to obtain a converged $Q^*_D$, and that the convergence is first order with respect to the mean particle spacing. They claim that this first-order convergence is due to the effect of shock waves, because the spatial accuracy of physical quantities becomes first order at a shock surface. The Godunov SPH method can resolve a shock surface with a much smaller number of particles thanks to the utilization of the Riemann solver, and thus much faster convergence is expected.
In this study, we extend the Godunov SPH method, which can achieve second-order accuracy in space, to elastic dynamics. The equation of motion and the equation of energy for elastic dynamics include the deviatoric stress tensor. We formulate the equation of motion, the equation of energy, and the evolution equation of the deviatoric stress tensor itself so that the formulated equations achieve second-order accuracy in space. Moreover, we develop a method to treat the Riemann solver for a general equation of state (hereafter, EoS) for elastic dynamics, and enable calculations of elastic dynamics using the Godunov SPH method. We perform several test calculations of elastic dynamics, and show that even in elastic dynamics the tensile instability can be suppressed just by selecting an appropriate interpolation for $V_{ij}^2$ depending on the sign of the pressure.
The structure of this paper is as follows: in Section 2 we extend the Godunov SPH method to elastic dynamics. The detailed method for the implementation is described in Section 3, which includes the treatment of the Riemann solver for a non-ideal gas EoS and the method to mitigate the tensile instability. In Section 4 we perform several test calculations of elastic dynamics. Section 5 is a summary.

Godunov SPH method for elastic dynamics
In this section, we introduce the fundamental equations for elastic dynamics and formulate the Godunov SPH method for these equations to achieve second-order accuracy in space.

Fundamental equations for elastic dynamics
The fundamental equations for elastic dynamics can be found, e.g., in [4]. The equation of continuity is

$$\frac{d\rho}{dt} = -\rho \frac{\partial v^{\alpha}}{\partial x^{\alpha}}, \qquad (1)$$

where $d/dt$ denotes the Lagrangian time derivative, $\rho$ is the density, $v^{\alpha}$ is the $\alpha$-th component of the velocity $\boldsymbol{v}$, and $x^{\alpha}$ is the $\alpha$-th component of the position $\boldsymbol{r}$. We also assume the summation rule over repeated Greek indices.
Hereafter, a Greek superscript denotes a component of a vector or tensor, and a Roman subscript denotes a particle number. The equation of motion is

$$\frac{dv^{\alpha}}{dt} = \frac{1}{\rho}\frac{\partial \sigma^{\alpha\beta}}{\partial x^{\beta}}, \qquad (2)$$

where $\sigma^{\alpha\beta}$ is the stress tensor. The stress tensor can be decomposed into the pressure $P$, which represents the diagonal part, and the deviatoric stress tensor $S^{\alpha\beta}$, which corresponds to the non-diagonal part:

$$\sigma^{\alpha\beta} = -P\delta^{\alpha\beta} + S^{\alpha\beta}, \qquad (3)$$

where $\delta^{\alpha\beta}$ is the Kronecker delta. $P$ can be expressed by an appropriate EoS for the solid. The equation of energy is

$$\frac{du}{dt} = \frac{\sigma^{\alpha\beta}}{\rho}\dot{\epsilon}^{\alpha\beta}, \qquad (4)$$

where $u$ is the specific internal energy and $\dot{\epsilon}^{\alpha\beta}$ is the strain rate tensor,

$$\dot{\epsilon}^{\alpha\beta} = \frac{1}{2}\left(\frac{\partial v^{\alpha}}{\partial x^{\beta}} + \frac{\partial v^{\beta}}{\partial x^{\alpha}}\right). \qquad (5)$$

$\sigma^{\alpha\beta}$ is a symmetric tensor. Thus Eq. (4) can be expressed in the simpler form

$$\frac{du}{dt} = -\frac{P}{\rho}\frac{\partial v^{\alpha}}{\partial x^{\alpha}} + \frac{S^{\alpha\beta}}{\rho}\dot{\epsilon}^{\alpha\beta}. \qquad (6)$$

In addition to these equations, an equation that determines the deviatoric stress tensor $S^{\alpha\beta}$ is necessary. We use the time evolution equation of the deviatoric stress tensor that assumes Hooke's law,

$$\frac{dS^{\alpha\beta}}{dt} = 2\mu\left(\dot{\epsilon}^{\alpha\beta} - \frac{1}{3}\delta^{\alpha\beta}\dot{\epsilon}^{\gamma\gamma}\right) + S^{\alpha\gamma}R^{\beta\gamma} + S^{\beta\gamma}R^{\alpha\gamma}, \qquad (7)$$

where $\mu$ is the shear modulus and $R^{\alpha\beta}$ is the rotation rate tensor,

$$R^{\alpha\beta} = \frac{1}{2}\left(\frac{\partial v^{\alpha}}{\partial x^{\beta}} - \frac{\partial v^{\beta}}{\partial x^{\alpha}}\right). \qquad (8)$$

If we use an EoS of the form $P = P(\rho, u)$, we can describe the motion of an elastic body.

Equations for Godunov SPH method
In the SPH method, we define the density at an arbitrary position $\boldsymbol{r}$ as

$$\rho(\boldsymbol{r}) = \sum_j m_j W(\boldsymbol{r} - \boldsymbol{r}_j, h), \qquad (9)$$

where $W(\boldsymbol{r}, h)$ is a kernel function and $h$ is a parameter called the smoothing length. In Section 2, we treat this smoothing length as constant in space. The kernel function has various forms. Throughout this paper, we use the Gaussian kernel,

$$W(\boldsymbol{r}, h) = \frac{1}{\pi^{d/2} h^{d}}\exp\left(-\frac{|\boldsymbol{r}|^2}{h^2}\right), \qquad (10)$$

where $d$ represents the number of dimensions. The equation of motion and the equation of energy for the Godunov SPH method are defined by the convolution of Eq. (2) and Eq. (6), respectively. The acceleration of the i-th particle is expressed as the kernel-weighted convolution of Eq. (2) (Eq. (11)), where the overdot represents the time derivative. Similarly, the time derivative of the internal energy of the i-th particle is defined by the convolution of Eq. (6) (Eq. (12)). We can formulate the equation of motion (11) in almost the same way as for hydrodynamics in [16]. What we should do is just replace $-P(\boldsymbol{r})$ in [16] with $\sigma^{\alpha\beta}(\boldsymbol{r})$. Finally, the equation of motion of the Godunov SPH method for elastic dynamics becomes Eq. (13), where $P^*_{ij}$ is the resultant pressure of the Riemann problem that uses the physical quantities of the i-th and j-th particles as the initial condition; $V^2_{ij}(h)$ and $s^*_{ij}$ are defined by Eqs. (14) and (15), and the weighted average of the deviatoric stress tensor, $S^{\alpha\beta *}_{ij}$, by Eq. (16). If we define the s-axis, which is along $\boldsymbol{r}_i - \boldsymbol{r}_j$ and has its origin at $(\boldsymbol{r}_i + \boldsymbol{r}_j)/2$, and expand $\rho^{-2}(\boldsymbol{r})$ linearly in the direction perpendicular to the s-axis, $V^2_{ij}(h)$ and $s^*_{ij}$ take the simpler forms of Eqs. (17) and (18). Equation (18) is also written in [15]. To calculate $V^2_{ij}(h)$ and $s^*_{ij}$, we need to interpolate $1/\rho(s)$ along the s-axis. In this paper we use linear interpolation and cubic spline interpolation. The formulas for $V^2_{ij}(h)$ and $s^*_{ij}$ in the cases of linear interpolation and cubic spline interpolation are written in [16]. Note that $V^2_{ij}(h)$ is also a function of the smoothing length. If we use cubic spline interpolation when the particles become much closer to each other than the smoothing length, $V^2_{ij}$ diverges due to the interpolation. $V^2_{ij}$ is originally a weighted average of $1/\rho^2(\boldsymbol{r})$; thus its value should be about $1/\rho^2(\boldsymbol{r})$. Therefore, if $V^2_{ij}$ calculated by cubic spline interpolation is much larger than $1/\rho^2(\boldsymbol{r})$, we should use linear interpolation. In this study, we use linear interpolation when $V^2_{ij}$ becomes larger than a critical value $V^2_{ij,\mathrm{crit}}$, where $\rho_{ij} = (\rho_i + \rho_j)/2$. As we use the result of the Riemann problem for $P^*_{ij}$, we could also use the result of the Riemann problem in elastic dynamics for $S^{\alpha\beta *}_{ij}$. However, in the Godunov method we utilize the Riemann solver to describe the shock wave accurately, and for this purpose it is enough to use the result of the Riemann problem for pressure.
Thus we use the simple weighted average of the deviatoric stress tensor expressed in Eq. (16) for $S^{\alpha\beta *}_{ij}$. We can also transform the equation of energy in almost the same way as in [16]. Finally, the equation of energy becomes a pairwise form analogous to Eq. (13), where we use the time-centred velocity $v^{\alpha *}_i = [v^{\alpha}_i(t) + v^{\alpha}_i(t + \Delta t)]/2$ for $v^{\alpha *}_i$ to achieve the conservation of total energy, $\Delta t$ being the time step. The reason why the total energy is conserved is explained in [16] for the case of hydrodynamics. For the same reason, the total energy is conserved exactly in our formulation. In [16], Inutsuka uses the result of the Riemann problem for $v^{\alpha *}_{ij}$, but this treatment can cause a problem if the EoS is not that of an ideal gas. In the case of positive pressure, the resultant velocity of the Riemann problem causes an effective energy transfer from the high-pressure particle to the low-pressure particle. For example, in the case of the collision between an aluminum sphere and a plate (the test calculation in Section 4.4), the collisional surface becomes a contact discontinuity. The pressure should be constant across a contact discontinuity, but the SPH calculation produces a "pressure wiggle" at the contact discontinuity due to discretization error. If the EoS is that of an ideal gas, the energy transfer stops when the pressure becomes constant, even when the pressure wiggle exists. However, the stiffened gas EoS (e.g. [18]) and the Tillotson EoS (e.g. [19]) have terms that represent the elastic body, such as $P = C_s^2(\rho - \rho_0)$. Thus, if a "density wiggle" exists, the pressure wiggle also exists irrespective of the internal energy, and energy can be transferred from the high-pressure particle continuously. Eventually the internal energy of the high-pressure particle becomes largely negative even though this particle is located in a compressed region. To prevent this problem, in this study we use the simple average value $v^{\alpha *}_{ij} = (v^{\alpha}_i + v^{\alpha}_j)/2$ for $v^{\alpha *}_{ij}$, and the result of the Riemann problem is used only for the pressure.
Finally, we formulate the time evolution equation of the deviatoric stress tensor. Following the formulation of the induction equation in [20], we formulate the time derivative of $S^{\alpha\beta}/\rho$. We simply differentiate $S^{\alpha\beta}/\rho$ and obtain

$$\frac{d}{dt}\left(\frac{S^{\alpha\beta}}{\rho}\right) = \frac{1}{\rho}\frac{dS^{\alpha\beta}}{dt} - \frac{S^{\alpha\beta}}{\rho^{2}}\frac{d\rho}{dt}. \qquad (24)$$

Substituting Eqs. (1) and (7) into Eq. (24), we obtain

$$\frac{d}{dt}\left(\frac{S^{\alpha\beta}}{\rho}\right) = \frac{2\mu}{\rho}\left(\dot{\epsilon}^{\alpha\beta} - \frac{1}{3}\delta^{\alpha\beta}\dot{\epsilon}^{\gamma\gamma}\right) + \frac{S^{\alpha\gamma}R^{\beta\gamma} + S^{\beta\gamma}R^{\alpha\gamma}}{\rho} + \frac{S^{\alpha\beta}}{\rho}\dot{\epsilon}^{\gamma\gamma}. \qquad (25)$$

Note that $(\partial/\partial x^{\alpha})v^{\alpha} = \dot{\epsilon}^{\gamma\gamma}$. As with the equation of motion and the equation of energy, we define the time derivative of $S^{\alpha\beta}/\rho$ of the i-th particle as the convolution of Eq. (25). This equation includes terms of the form of Eqs. (26) and (27) (note that $\dot{\epsilon}^{\alpha\beta}$ and $R^{\alpha\beta}$ are sums of velocity gradients). Using Eq. (9), the identity $\sum_j \frac{m_j}{\rho(\boldsymbol{r})} W(\boldsymbol{r} - \boldsymbol{r}_j, h) = 1$, $\partial v^{\alpha}_i/\partial x^{\beta} = 0$, and partial integration, we can transform Eq. (27) accordingly. Finally, we calculate the integral using interpolation as in [16], and Eq. (30) becomes Eq. (31). We can transform Eq. (26) using Eqs. (29) and (31); the time derivative of $S^{\alpha\beta}/\rho$ of the i-th particle then becomes Eq. (32), with the auxiliary definitions of Eqs. (33) and (34). In the actual calculation, we follow the time evolution of $S^{\alpha\beta}/\rho$ using Eq. (32), and then obtain $S^{\alpha\beta}_i$ at each time step from $S^{\alpha\beta}_i = \rho_i\,(S^{\alpha\beta}/\rho)_i$. Our formulation of the equation of motion, the equation of energy and the time evolution equation of the deviatoric stress tensor essentially follows [16]. Therefore, these equations are expected to achieve second-order accuracy. We confirm this fact in the convergence test in Section 4.1.
The density can be calculated by Eq. (9). However, it is known that this equation causes a problem at the surface of a solid body. The density calculated by Eq. (9) becomes small near the free surface, and the pressure also becomes small via the EoS.
Thus the solid body tends to be deformed by an unphysical gradient of pressure near a free surface [21]. We can prevent this problem by calculating the time evolution of the density using the equation of continuity. In this study, we use the simple Lagrangian derivative of Eq. (9) as the equation of continuity,

$$\frac{d\rho_i}{dt} = \sum_j m_j \left(v^{\beta}_i - v^{\beta}_j\right) \frac{\partial}{\partial x^{\beta}_i} W(\boldsymbol{r}_i - \boldsymbol{r}_j, h). \qquad (36)$$

Eq. (36) is used, e.g., in [4]. Linear momentum is conserved exactly in our method because the equation of motion (13) is written in an anti-symmetric form. However, as is usually the case with SPH methods for elastic dynamics or magnetohydrodynamics, the angular momentum is not conserved exactly in our method because of the existence of non-central forces. This problem is stated in [22], and [23] proposed a modification of the gradient of the kernel function to recover angular momentum conservation. This aspect will be studied in our next paper.
In this section, we describe the detailed implementation of our Godunov SPH method for elastic dynamics. In Section 3.1, the method to use the Riemann solver for a non-ideal gas EoS is described. In Section 3.2, we explain the mitigation of the tensile instability in our formulation. In Section 3.3, we explain how to use the variable smoothing length.
The Riemann solver is a method to solve the Riemann problem (the shock tube problem). In the Godunov scheme, we can describe the shock wave accurately using the Riemann solver. We have semi-analytic formulas for the Riemann solver in the case of the ideal gas EoS and the simple EoS for an elastic body ($P = C_s^2(\rho - \rho_0)$), and we can solve them using iteration. The Riemann solver for the ideal gas EoS is introduced in [24], and that for the EoS of an elastic body is written in [15]. However, a general EoS such as the Tillotson EoS is complicated in contrast to those for an ideal gas or an elastic body. At present, analytical solutions of the Riemann problems for such EoS are not available. The Riemann solver is a tool to treat the shock wave, and we do not necessarily need the analytical solution. Therefore, in this study, we propose a method to obtain numerical solutions of the Riemann problems for a general EoS.
An EoS that represents solids, such as the Tillotson EoS or the stiffened gas EoS, behaves like an elastic body at low temperature and like an ideal gas at very high temperature because of sublimation. Therefore, it is expected that we may use the Riemann solver for the EoS of an elastic body at low temperature, and that for the ideal gas EoS at high temperature.
First, we consider the case where the EoS behaves like an ideal gas at high temperature. The specific heat ratio $\gamma$ is a good indicator to measure the property of an ideal gas. In an adiabatic change, the polytropic relation $P = K\rho^{\gamma}$ holds, and the specific heat ratio is the exponent of the density. Similarly, we can evaluate an effective specific heat ratio $\gamma_{\rm eff}$ for a general EoS by calculating the exponent of the density,

$$\gamma_{\rm eff} = \frac{\rho}{P}\left(\frac{\partial P}{\partial \rho} + \frac{\partial P}{\partial u}\frac{du}{d\rho}\right),$$

where we can express $du/d\rho$ using the first law of thermodynamics, $du = -P\,dV = (P/\rho^{2})\,d\rho$, as

$$\frac{du}{d\rho} = \frac{P}{\rho^{2}}.$$

We can calculate the formulas for $\partial P/\partial \rho$ and $\partial P/\partial u$ easily once the EoS is given. We solve the Riemann problem at high temperature by approximating it with the Riemann solver for an ideal gas with the specific heat ratio of Eq. (39), constructed from $\gamma_{\rm eff,L}$ and $\gamma_{\rm eff,R}$, where $\gamma_{\rm eff,L}$ is the effective specific heat ratio of the left-hand side of the Riemann problem and $\gamma_{\rm eff,R}$ is that of the right-hand side. Hereafter, the subscript L denotes the value on the left-hand side of the Riemann problem, and R denotes that on the right-hand side.
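The effective specific heat ratio just defined is straightforward to evaluate numerically. The following is a minimal sketch, assuming a hypothetical eos object that provides the pressure and its partial derivatives; the arithmetic mean used to combine the left and right values is one natural reading of Eq. (39), not a confirmed reproduction of it.

```python
def gamma_eff(eos, rho, u):
    """Effective specific heat ratio: the exponent of the density along an
    adiabat, using du/drho = P/rho**2 from the first law of thermodynamics."""
    P = eos.pressure(rho, u)
    dP_drho = eos.dP_drho(rho, u)  # partial derivative at fixed u
    dP_du = eos.dP_du(rho, u)      # partial derivative at fixed rho
    return (rho / P) * (dP_drho + (P / rho**2) * dP_du)

def gamma_for_riemann(eos, rho_L, u_L, rho_R, u_R):
    """Specific heat ratio handed to the ideal-gas Riemann solver,
    taken here as the mean of the left and right effective values."""
    return 0.5 * (gamma_eff(eos, rho_L, u_L) + gamma_eff(eos, rho_R, u_R))
```

For an ideal gas EoS, $P = (\gamma - 1)\rho u$, gamma_eff reduces exactly to $\gamma$, which is a convenient sanity check for any implementation.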
It is assumed that this approximation is valid when $\gamma_{\rm eff,L}$ and $\gamma_{\rm eff,R}$ are comparable, because in that case the EoS behaves locally like an ideal gas EoS, but it becomes poor when $\gamma_{\rm eff,L}$ and $\gamma_{\rm eff,R}$ are largely different.
Next, we consider the case where the EoS behaves like an elastic body at low temperature. We can describe the EoS of an elastic body, $P = C_s^2(\rho - \rho_0)$, once we determine the bulk sound speed $C_s$ and the reference density $\rho_0$. We approximate the bulk sound speed (Eq. (40)), and we can express the reference density using $C_s$ as $\rho_0 = \rho - P/C_s^2$ in the case of the EoS of an elastic body; thus we approximate the $\rho_0$ used for the Riemann solver accordingly (Eq. (41)). Using Eqs. (40) and (41) in the Riemann solver for the EoS of an elastic body, we can approximately obtain the result of the Riemann problem at low temperature.
In the Godunov SPH method, we use the resultant pressure of the Riemann problem for $P^*_{ij}$, which is defined for each pair of particles i and j. When we calculate $P^*_{ij}$, we use the physical quantities of the i-th and j-th particles for the values of the left- and right-hand sides of the Riemann problem. Thus the values with the subscript L or R in Eqs. (39), (40) and (41) are variables depending on the particles, and $\gamma$, $C_s$ and $\rho_0$ are the appropriate values that are valid near each pair of i-th and j-th particles and are used for the Riemann solver of the ideal gas or elastic body EoS. We should have a criterion for which approximation to use, and this criterion will depend on the EoS. For example, in the case of the stiffened gas EoS, a possible criterion compares the squared sound speed for the solid, $C_0^2$, with that for the gas, $\gamma_0 P/\rho$ (Eq. (43)). If Eq. (43) is satisfied, we use the Riemann solver for the EoS of an elastic body, and otherwise the one for the ideal gas EoS, for each pair of i-th and j-th particles. In the calculation of the collision between an aluminum sphere and an aluminum plate in Section 4.4, we use this EoS and criterion, and we can calculate without any problem. For the Tillotson EoS, a possible criterion is the internal energy of complete vaporization $E_{\rm cv}$, which is one of the parameters of the Tillotson EoS. If the internal energy of the i-th or j-th particle is greater than $E_{\rm cv}$, we utilize the Riemann solver for the EoS of an ideal gas, and otherwise we use the one for the elastic body EoS.
The gradients of physical quantities that appear in the method can be calculated by the standard SPH summation,

$$\nabla f(\boldsymbol{r}_i) = \sum_j \frac{m_j}{\rho_j}\, f_j\, \frac{\partial}{\partial \boldsymbol{r}_i} W(\boldsymbol{r}_i - \boldsymbol{r}_j, h). \qquad (44)$$

However, this method produces an unphysical gradient near the free surface because there are no particles outside of the free surface. To prevent this problem, we modify Eq. (44) as follows:

$$\nabla f(\boldsymbol{r}_i) = \sum_j \frac{m_j}{\rho_j}\, \left(f_j - f_i\right)\, \frac{\partial}{\partial \boldsymbol{r}_i} W(\boldsymbol{r}_i - \boldsymbol{r}_j, h). \qquad (45)$$

Eq. (45) is also used in [25]. As pointed out by [15], the gradient of pressure calculated by Eq. (44) promotes the instability of the Nyquist-frequency perturbation in the negative pressure region. In the case of the perturbation of the Nyquist frequency, the density and pressure [...]. [25] introduces an approximate Riemann solver into the Godunov SPH method. In principle, it can be used for any EoS with relatively small computational cost (see [26] for the care required in some cases).

Mitigation of the tensile instability using the Godunov SPH method
In [15], Sugiura and Inutsuka conduct the linear stability analysis of the Godunov SPH method for the hydrodynamics equations, and evaluate the stability against the tensile instability. They find that if we choose the interpolation method for $V^2_{ij}$ appropriately, depending on the sign of the pressure and the number of dimensions, we can calculate stably. In two or three dimensions, linear interpolation is stable for positive pressure, and cubic spline interpolation is stable for negative pressure.
Therefore, the equation of motion of the Godunov SPH method for hydrodynamics takes the form of Eq. (46), with the linearly interpolated $V^2_{ij}$ for positive pressure and the cubic-spline $V^2_{ij}$ for negative pressure. To achieve conservation of total energy, we should use the same type of $V^2_{ij}$ in the equation of energy. This result is for the equations of hydrodynamics, and it is not obvious that the same method is valid for elastic dynamics. However, in usual calculations, if two particles approach each other, the deviatoric stress tensor produces a repulsive force, and this can stabilize the tensile instability. Thus we can assume that the same method as in [15] is sufficient. Indeed, the test calculations of Section 4 show that we can calculate stably by this method. We describe the linear stability analysis of the Godunov SPH method for elastic dynamics in Appendix A, and the result of the linear stability analysis also supports our conclusion. Therefore, in this paper, we use Eq. (47), with the interpolation for $V^2_{ij}$ selected by the sign of the pressure, as the equation of motion of the Godunov SPH method for elastic dynamics. $V^2_{ij}$ in the time evolution equation of the deviatoric stress tensor does not contribute to the stability; thus we can use any type of $V^2_{ij}$ for it. However, using the same type of $V^2_{ij}$ is favourable in terms of computational cost. Cubic spline interpolation needs the gradient of the specific volume. As discussed in Section 3.1, we calculate the gradient of the specific volume in the difference form of Eq. (45) [...].

Variable smoothing length
We have so far treated the smoothing length as constant in space. However, the smoothing length should be close to the average particle spacing. Thus, in calculations where the density varies largely in space, the smoothing length should also vary. In [16], the smoothing length of the i-th particle is defined by Eq. (50), where $\eta$ is a constant corresponding to the ratio between the smoothing length and the average particle spacing, and $C_{\rm smooth}$ is a constant that determines the distribution of the smoothing length. $\eta$ should be about 1, and throughout this paper we use $\eta = 1$. If $C_{\rm smooth}$ is larger than 1, the distribution of the smoothing length becomes smoother than the distribution of the density. If the smoothing length is represented by a spatially variable $h(\boldsymbol{r})$, we cannot integrate Eq. (11) analytically even if a polynomial approximation of $\rho^{-1}(\boldsymbol{r})$ is used. In [16], Inutsuka conducts the integration analytically by assuming that the smoothing length is $h_i$ for the half of the integration space that includes the i-th particle, and $h_j$ for the other half. In this study we adopt the same procedure. The equation of motion and the equation of energy for the variable smoothing length, as well as Eqs. (33) and (34), are rewritten accordingly with $h_i$ and $h_j$. Also in the case of the variable smoothing length, we should use the appropriate interpolation method for $V^2_{ij}$, depending on the sign of $P_i + P_j$, to suppress the tensile instability. We define the density for the variable smoothing length in the so-called "gather" formulation [27]. In the case of the variable smoothing length, we have to take the gradient of the smoothing length into account to derive the equation of continuity; the corresponding equation of continuity is proposed in [28]. In Appendix B, we conduct the linear stability analysis for the equations of the variable smoothing length, and derive how large $C_{\rm smooth}$ should be.

Test Calculation
In this section, to evaluate the validity of the Godunov SPH method for elastic dynamics, we conduct test calculations such as the collision of rubber rings, the oscillation of a plate, and the impact of an aluminum sphere on an aluminum plate.
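Before turning to the tests, the pressure-sign switch for $V^2_{ij}$ described in Section 3.2 amounts to a few lines of logic. In the sketch below, v2_linear and v2_cubic are assumed to be precomputed from the closed forms in [16], and the divergence guard is an illustrative stand-in for the $V^2_{ij,\mathrm{crit}}$ criterion, not the paper's exact constant.

```python
def select_v2ij(P_i, P_j, v2_linear, v2_cubic, rho_i, rho_j, guard=4.0):
    """Choose the interpolation for V_ij^2 following the stability result:
    linear for positive pressure, cubic spline for negative pressure
    (two or three dimensions)."""
    if P_i + P_j >= 0.0:
        return v2_linear
    # V_ij^2 should stay of order 1/rho_bar**2; fall back to the linear
    # estimate when the cubic spline value diverges for close particles.
    rho_bar = 0.5 * (rho_i + rho_j)
    if v2_cubic > guard / rho_bar**2:
        return v2_linear
    return v2_cubic
```

The same selected value must then be reused in the equation of energy so that the total energy remains exactly conserved.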
We show that the Godunov SPH method can suppress the tensile instability even in elastic dynamics. In this study, we use a simple predictor-corrector method as the time integrator. This method is almost the same as the second-order Runge-Kutta method. We first predict the time-centred physical quantities, where $U = \rho, u, \boldsymbol{v}, S^{\alpha\beta}/\rho$. Next, we calculate the time-centred derivatives using the time-centred physical quantities. Finally, the physical quantities of the next time step are calculated from the time-centred derivatives. The time step $\Delta t$ is determined by the Courant condition, where $C_{s,i}$ is the local sound speed at the position of the i-th particle. In this study, we use $C_{\rm CFL} = 0.5$.
We use the second-order Riemann solver that is described in [16] with the modified monotonicity constraint of [15]. This monotonicity constraint is that we use the first-order Riemann solver when there are particles with opposite-sign gradients near their positions. This condition is written for a pair of i-th and j-th particles as Eq. (59), where $f$ represents $\rho$ or $P$. If there is any particle j that satisfies the condition of Eq. (59) within $3h_i$ from the i-th particle, we use the first-order Riemann solver for the i-th particle. Here, the gradient of the physical quantity $f$ is calculated by Eq. (45).

Convergence test
First, we conduct a convergence test to confirm that our Godunov SPH method for elastic dynamics really achieves second-order accuracy in space. In elastic dynamics, the longitudinal wave and the tangential wave exist as linear waves. In this subsection, we conduct calculations of longitudinal and tangential waves in two dimensions as a test problem for the convergence test. Here, we use the simple EoS of an elastic body,

$$P = C_s^2(\rho - \rho_0), \qquad (61)$$

where $C_s$ is the bulk sound speed and $\rho_0$ is the reference density of the material. In this subsection, we set $C_s = 1.0$ and $\rho_0 = 0.1$. The density in the unperturbed state is $\rho = 1.0$, and thus the pressure in the unperturbed state is $P = 0.9$. We set the initial conditions for the longitudinal wave as a small sinusoidal perturbation of amplitude $X$, where $X = 0.001/k$ and $k = 2\pi$. In the case of the longitudinal wave, $\omega = \sqrt{C_s^2 + 4\mu/(3\rho)}\,k = 2\pi\sqrt{7/3}$. The initial conditions for the tangential wave are set analogously, where, in the case of the tangential wave, $\omega = \sqrt{\mu/\rho}\,k = 2\pi$. We consider the variable smoothing length with $C_{\rm smooth} = 1.0$. To measure the error, we calculate the difference from the reference data as the averaged error $\epsilon$, where $N_{\rm tot}$ is the total number of particles and $U_{\rm ref}(\boldsymbol{r}_i)$ represents the reference solution. In Fig. 1, $\epsilon$ is plotted as a function of the average particle spacing $\Delta x$. As shown in Fig. 1, the errors are proportional to $\Delta x^2$ for both the longitudinal and the tangential wave. Therefore, the Godunov SPH method for elastic dynamics that we develop in this study shows second-order accuracy in space.

One-dimensional shock tube problem using Tillotson EoS
To evaluate the validity of our approximation in the Riemann solver for a non-ideal gas EoS, we calculate a one-dimensional shock tube problem using the Tillotson EoS. For simplicity, we use the equations for hydrodynamics. We use the parameters of the Tillotson EoS for basalt [5], and the units are cgs. For comparison, we also perform a calculation by the standard SPH method using artificial viscosity [3] with high resolution. The initial conditions for this shock tube problem are $\rho_L = 2.72$ and $\rho_R = 2.72$. We use 200 particles for each side, and the mass of each particle is $m = 0.0136$. In the case of the calculation by the standard SPH method, we use 2000 particles. $\gamma_{\rm eff,L}$ and $\gamma_{\rm eff,R}$ are largely different in this case, and thus this problem provides a severe test.
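Both the predictor-corrector integrator and the Courant condition used in all of these runs can be sketched compactly. This is one common realization of a midpoint-type scheme consistent with the description above; since the exact form of the Courant condition is not reproduced here, $|v| + C_s$ is assumed as the signal speed.

```python
import numpy as np

def cfl_timestep(h, cs, v, C_CFL=0.5):
    """Courant-limited time step over all particles.
    h, cs: arrays of smoothing lengths and local sound speeds;
    v: (N, dim) array of velocities."""
    return C_CFL * np.min(h / (cs + np.linalg.norm(v, axis=1)))

def predictor_corrector_step(U, deriv, dt):
    """One step of a simple predictor-corrector (midpoint) scheme.
    U: dict of particle arrays (rho, u, v, S/rho);
    deriv: function returning dU/dt with the same keys."""
    dU = deriv(U)
    U_half = {k: U[k] + 0.5 * dt * dU[k] for k in U}  # predictor to t + dt/2
    dU_half = deriv(U_half)                           # time-centred derivatives
    return {k: U[k] + dt * dU_half[k] for k in U}     # corrector to t + dt
```

Because deriv is evaluated from time-centred quantities, the update is second-order accurate in time, matching the second-order spatial accuracy of the method.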
Figure 2 shows the result of this shock tube problem calculated by our Godunov SPH method and the standard SPH method. As we can notice from Fig. 2, the results of the Godunov SPH method using the Riemann solver for the ideal gas EoS and of the standard SPH method are almost the same. Therefore, our approximation method can describe shock waves correctly even if the EoS is not that of an ideal gas. In particular, our Godunov SPH method is valid for hypervelocity impacts because the Godunov scheme can treat extremely strong shock waves accurately.

Collision of rubber rings in two dimensions
Gray et al. [10] calculate the collision and bounce-off of two rubber rings to evaluate the effectiveness of their method against the tensile instability. If we conduct this calculation without any prescription against the tensile instability, numerical fragmentation occurs in the simulation and we cannot calculate the bounce-off of the rubber rings. They prevent the tensile instability by introducing an artificial stress. In this subsection, we conduct the same simulation using the Godunov SPH method for elastic dynamics. Also in this subsection, we use the EoS of Eq. (61). The density is scaled using $\rho_0$, the velocity is scaled using $C_s$, and the length is scaled using the width of the ring $w$. We adopt a constant smoothing length because in this simulation the density is almost constant, and the Riemann solver for the elastic EoS is used. We place two rings with a separation of $1w$. The inner radius of the rings is $3w$, and the outer radius is $4w$. These rings collide with a relative velocity of $0.118C_s$. The particles are placed on a square lattice with a side length of $0.1w$ within the two rings. The smoothing length is $h = 0.1w$, and we set the shear modulus to $\mu = 0.22\,C_s^2\rho_0$. The initial density of each particle is set to $\rho_0$, and all components of the initial deviatoric stress tensor are set to 0. The same conditions for the initial density and deviatoric stress tensor are adopted in the subsequent test calculations.

Oscillating plate in three dimensions
To evaluate the validity of our method in three dimensions, we calculate the oscillation of an elastic plate, one edge of which is fixed. The same test calculation is done by Gray et al. [10]; however, their calculation is in two dimensions. The analytical solution for the oscillation of an extremely thin plate can be found in [29]. We use the same EoS and unit system as those of Section 4.2, except for the unit of length; in this section the length is scaled using the thickness of the plate $H$. We also consider the case of a constant smoothing length that is the same as the initial particle spacing. The length of the plate $L$ is $11H$ (x-direction) and the width is $2H$ (z-direction). The particles are placed on a square lattice with a side length of $0.1H$ within this plate. The shear modulus $\mu$ is $0.5C_s^2\rho_0$. Gray et al. expressed the fixed edge by putting the plate between two layers of SPH particles that are not allowed to move. Here, for simplicity, we fix the particles that are located within $1H$ from the left end of the plate. The initial velocity distribution is the same as that of [10]: the velocity in the y-direction, $v_y$, at position $x$ is given by the mode shape of [10], where $V_f$ is the velocity at the free edge of the plate. [...] On the other hand, from Fig. 5, we can see that the oscillation is calculated stably if the method of this paper is applied. We confirmed that this oscillation continues stably for many periods.
The artificial stress of [10] requires the following procedure: first, we rotate the frame of reference to diagonalize the stress tensor.
Then, if a diagonal component is positive (i.e. tensile stress), we add the artificial stress to that component. Finally, we rotate the frame of reference back to the original coordinates. In this procedure, we need to derive the eigenvalues and eigenvectors of the stress tensor of each particle. We can derive the eigenvalues and eigenvectors analytically in the two-dimensional case. However, in three dimensions, we have to use a numerical method such as the Jacobi method [30]. In contrast, our method does not require such a time-consuming procedure; we just need to select the appropriate interpolation method.

According to [29], the angular frequency of an extremely thin plate is written as

$\omega = k^2 H \sqrt{\dfrac{E}{12\rho(1-\nu^2)}}$,

where $k$ satisfies $\cos(kL)\cosh(kL) = -1$ for a plate clamped at one end ($kL \approx 1.875$ for the fundamental mode), $E$ is Young's modulus, and $\nu$ is Poisson's ratio. $E$ and $\nu$ are expressed as

$E = \dfrac{9K\mu}{3K+\mu}, \qquad \nu = \dfrac{3K-2\mu}{2(3K+\mu)}$,

where $K$ is the bulk modulus. In the case of the EoS of Eq. (61), $K = \rho_0 C_s^2$. The angular frequency of this calculation is $\omega = 0.01201$. Thus the analytical period of oscillation of the plate in the limit of infinitesimal thickness is $T = 2\pi/\omega \approx 523$. The oscillation period of our simulation is $T_{\rm sim} \approx 665$. We expect that the difference between the period observed in our simulations and the period of the infinitesimally thin plate decreases with decreasing ratio of the thickness to the length of the plate.
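The relations between $(K, \mu)$ and $(E, \nu)$, and the resulting plate frequency, can be checked numerically. The sketch below is ours, not the paper's code: it uses the standard isotropic-elasticity conversions together with the clamped-free thin-plate dispersion relation, where we assume the fundamental root $kL \approx 1.8751$ and take the free length as $L = 10H$ (the $1H$ clamped region subtracted from the $11H$ plate). Under those assumptions it reproduces the quoted $\omega = 0.01201$ and $T \approx 523$.

```python
import numpy as np

# Material parameters in the scaled units of this section: K = rho0 * Cs^2
Cs, rho0, mu = 1.0, 1.0, 0.5
K = rho0 * Cs**2

# Standard isotropic-elasticity conversions from (K, mu) to (E, nu)
E = 9.0 * K * mu / (3.0 * K + mu)
nu = (3.0 * K - 2.0 * mu) / (2.0 * (3.0 * K + mu))

H = 1.0        # plate thickness (the unit of length here)
L = 10.0 * H   # free length: 11H total minus the 1H clamped region (our assumption)
kL = 1.8751    # fundamental root of cos(kL)cosh(kL) = -1 (clamped-free mode)

# Thin-plate fundamental angular frequency, with plane-strain modulus E/(1 - nu^2)
omega = (kL / L)**2 * H * np.sqrt(E / (12.0 * rho0 * (1.0 - nu**2)))
print(f"omega = {omega:.5f}")            # ~0.01201, matching the text
print(f"T = {2.0 * np.pi / omega:.0f}")  # ~523, vs. T_sim ~ 665 in the simulation
```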
Hypervelocity impact

For this test we use the same material properties as those of [31]. In this subsection, we use cgs units. The radius of the aluminum sphere is 0.5 [cm]. In this test calculation, the average particle spacing varies greatly owing to the hypervelocity impact, so we use the variable smoothing length. $C_{\rm smooth}$ is set to 2.0 to suppress the tensile instability in the negative-pressure region caused by the variable smoothing length. As explained in Section 3.1, we select the Riemann solver for the ideal gas EoS or for the simple EoS of an elastic body using the criterion of Eq. (43).

To introduce the effect of the plasticity of aluminum, we adopt an elastic-perfectly plastic model using the von Mises yielding criterion [4]. In this model, we limit the deviatoric stress tensor that is used for the time evolution equations as

$S_i^{\alpha\beta} \rightarrow f_i S_i^{\alpha\beta}, \qquad f_i = \min\!\left[\dfrac{Y_0^2}{3J_{2,i}},\ 1\right]$,

where $Y_0$ is the yielding stress and $J_{2,i}$ is the second invariant of the deviatoric stress tensor, defined as

$J_{2,i} = \dfrac{1}{2} S_i^{\alpha\beta} S_i^{\alpha\beta}$.

$Y_0$ is set to $3.0 \times 10^9$ [dyne/cm$^2$].

Figure 7 shows the result of the calculation when we use the appropriate interpolation depending on the sign of the pressure. Figure 8 shows the result when we use only linear interpolation, independent of the sign of the pressure, and Fig. 9 shows that when we use only cubic spline interpolation. All results are plotted at $t = 8\,\mu$s. As we can see from Figs. 7 and 8, if we select the interpolation or use only linear interpolation, there is no void at the surface of collision. Figure 9 shows the appearance of a void in the case of cubic spline interpolation. The voids actually appear in the compressed regions where the pressure is positive. This is not surprising, since cubic spline interpolation in two dimensions is known to be unstable in the positive-pressure regime. In [9], the instability in the compressed region is called the pairing instability.

To do a reasonable numerical simulation with the Godunov SPH method, we need not only to use the appropriate interpolation, but also to use an appropriate monotonicity constraint and smoothing length. To show the importance of using an appropriate monotonicity constraint, we run the same simulation without the modified monotonicity constraint of Eq. (59). In addition, to investigate the importance of using the appropriate smoothing length, we conduct the simulation using a constant smoothing length with $h = 0.02$ [cm]. Here, in both simulations, we select the interpolation method depending on the sign of the pressure, as in Fig. 7. Figure 10 shows the result without the modified monotonicity constraint, and Fig. 11 shows that with the constant smoothing length. In both Figs. 10 and 11, we can see a small void. The pairing instability at positive pressure is essentially caused when the particle spacing is much smaller than the smoothing length; in that case particles cannot push each other back, and they end up clustering.

According to the test calculations of [14], the void is created in the case of the standard SPH method with a general artificial viscosity. This implies that the numerical dissipation due to the artificial viscosity term is not sufficient to prevent the pairing instability at the surface of collision. Dissipation due to the Riemann solver, by contrast, appears to be sufficient to prevent it.

Calculation of restitution coefficient

Finally, to show that our Godunov SPH method for elastic dynamics can be used to describe practical experiments, we calculate the restitution coefficient for the impact of a steel sphere on a steel plate. Aryaei et al. [32] measure the restitution coefficient by dropping steel or aluminum spheres on steel or aluminum plates, and investigate the dependence of the restitution coefficient on the sphere diameter. The restitution coefficient is calculated from the height to which the spheres bounce. As a result, they find that the restitution coefficient decreases with increasing sphere diameter. They also analyze the restitution coefficient by the Finite Element Method and show the same dependence. In this subsection, we simulate the impact of various-size steel spheres on a steel plate with the Godunov SPH method for elastic dynamics.

In the calculation by the Finite Element Method of [32], the number of elements for the sphere is fixed, independent of the size of the sphere. Thus we also use the same number of particles for every size of sphere. The SPH particles are placed on a square lattice with a side length of $R/20$, where $R$ represents the radius of the sphere; in other words, we put twenty particles along the radial direction. In this subsection, we use a constant smoothing length with $h = R/20$, and use the EoS of Eq. (61). We can find the material density, Young's modulus $E$, and Poisson's ratio $\nu$ of steel in [32]. The reference density for the EoS, $\rho_0$, is set to the material density of steel, $\rho_0 = 7.57$ [g/cm$^3$]. The sound speed for the EoS, $C_s$, is calculated from Young's modulus and Poisson's ratio as $C_s = \sqrt{K/\rho_0}$, where $K = E/[3(1-2\nu)]$ is the bulk modulus. For the interaction between particles that constitute the sphere and particles that constitute the plate, we permit only the repulsive force along the line joining the two particles. In particular, the acceleration $\mathbf{a}_{ij}$ of the $i$-th particle exerted by the $j$-th particle is restricted to its repulsive component, following Eq. (16), if the $i$-th and $j$-th particles represent different solids (sphere or plate).

The restitution coefficient decreases with increasing sphere diameter because the mass of the sphere increases. If the mass increases, the force applied to the surface of collision becomes large and the plastic deformation becomes large. In that case the energy dissipation by plastic deformation increases, so that the restitution coefficient decreases. Although we still need to examine the validity of the plastic model and of parameters such as the shear modulus, simulations with the Godunov SPH method for elastic dynamics seem to reproduce the result of the experiments reasonably well.
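To make the plasticity treatment concrete, here is a minimal sketch of an elastic-perfectly plastic limiter of the kind used in the impact tests above. We assume the common form $S \rightarrow fS$ with $f = \min[Y_0^2/(3J_2),\,1]$ and $J_2 = \frac{1}{2}S^{\alpha\beta}S^{\alpha\beta}$, as in Benz & Asphaug-style impact codes; treat this as one standard variant rather than the authors' exact implementation.

```python
import numpy as np

Y0 = 3.0e9  # yielding stress [dyne/cm^2], as quoted in the text

def limit_deviatoric_stress(S, Y0=Y0):
    """Scale the deviatoric stress tensor S (3x3, traceless) back toward the
    von Mises yield surface: S -> f * S with f = min(Y0^2 / (3 J2), 1),
    where J2 = 0.5 * S_ab S_ab is the second invariant.
    This is one common form (e.g. Benz & Asphaug-style codes); the paper's
    exact expression for f may differ."""
    J2 = 0.5 * np.tensordot(S, S)  # double contraction S_ab S_ab
    if J2 <= 0.0:
        return S
    f = min(Y0**2 / (3.0 * J2), 1.0)
    return f * S

# Illustrative usage with a made-up deviatoric state:
S = np.array([[ 4.0e9,  1.0e9, 0.0],
              [ 1.0e9, -2.0e9, 0.0],
              [ 0.0,    0.0,  -2.0e9]])
print(limit_deviatoric_stress(S))
```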
Summary

In this paper, we extended the Godunov SPH method to elastic dynamics. On the basis of the formulation of the Godunov SPH method, we formulated the equation of motion, the equation of energy, and the time evolution equation of the deviatoric stress tensor. We confirmed by a convergence test that these formulated equations achieve second-order accuracy in space. Moreover, we developed a method to handle the Riemann solver for a non-ideal gas equation of state. Next, we applied the stabilizing method for the tensile instability of [15] to elastic dynamics, and conducted several test calculations, such as the rubber rings collision, the oscillating plate, and the impact of a sphere on a plate, to evaluate the validity of our method. We confirmed that the method to suppress the tensile instability developed by [15] for the hydrodynamics equations is also valid for the elastic dynamics equations. This stabilizing method selects the appropriate interpolation method for $V_{ij}^2$ depending on the sign of the pressure. The results show that if we select the appropriate interpolation method for $V_{ij}^2$, we can calculate stably. To suppress the tensile instability in the calculation of hypervelocity impact, we should also consider the monotonicity constraint and the way the smoothing length is treated, and we confirmed that the Godunov-type scheme is valid for such problems. We hope that our method can be used to solve various problems in elastic dynamics.

Appendix A

In this Appendix, we conduct a linear stability analysis of the Godunov SPH method for elastic dynamics. The particle spacing is affected by the longitudinal wave, and instability of the longitudinal wave causes the tensile instability; thus we conduct the linear stability analysis for the longitudinal wave. We neglect discretization in the direction of time and assume infinitely accurate time integration, because the tensile instability does not depend on the time integration method. We assume that the mass $m$ of each particle is the same for all particles. A constant smoothing length is considered here; we conduct the linear stability analysis for the variable smoothing length in Appendix B. To separate the effect of viscosity from the tensile instability, we do not use the Riemann solver for $P_{ij}^*$, but assume $P_{ij}^* = (P_i + P_j)/2$.

In the unperturbed state, the particles are placed on a square lattice with side length $\Delta x$; we denote these unperturbed positions by $\bar{\mathbf{r}}_i$ (Eq. A1). We add a perturbation to the x-component, so that the perturbed positions of the particles are written as

$x_i = \bar{x}_i + \delta x_i, \qquad \delta x_i = \epsilon_x \exp[i(k\bar{x}_i - \omega t)]$,   (A2)

where $\epsilon_x$ is an infinitesimal constant, $k$ and $\omega$ represent the wavenumber and the angular frequency of the perturbation respectively, and the $i$ that is not a subscript denotes the imaginary unit. Hereafter, $\epsilon$ represents an infinitesimal constant, and we neglect second- and higher-order infinitesimal values. From Eq. (A2) and $\dot{\mathbf{r}}_i = \mathbf{v}_i$, the velocity of the $i$-th particle becomes $v_i = -i\omega\,\delta x_i$. We define the density in the unperturbed state as $\rho$, and we write the density of the $i$-th particle as $\rho_i = \rho + \delta\rho_i$ (A4). From Eq. (36), we can write $\delta\rho_i$ using $\delta x_i$ as $\delta\rho_i = -i\rho D\,\delta x_i$ (A5), where $D$ is a wavenumber-dependent coefficient arising from the kernel summation. From Eqs. (A4) and (A5), the density is represented as $\rho_i = \rho(1 - iD\,\delta x_i)$. Note that this representation of the density is the same as that calculated by Eq. (9), as shown in Appendix B of [15]. Therefore, the stability does not change whether we calculate the density by Eq. (9) or use the time-evolved density of Eq. (36). The pressure of the $i$-th particle is represented as $P_i = P + \delta P_i = P + C_s^2\,\delta\rho_i = P - iC_s^2\rho D\,\delta x_i$, where $P$ and $C_s$ represent the pressure and the sound speed in the unperturbed state respectively.
We define the xx-component of the deviatoric stress tensor in the unperturbed state as $S^{xx}$, and we write $S^{xx}/\rho$ of the $i$-th particle as Eq. (A7). We substitute Eqs. (A6) and (A7) into Eq. (32) and obtain Eq. (A8). Using Eqs. (A8) and (35), the deviatoric stress tensor of the $i$-th particle can be written accordingly. Here, for perturbations of long wavelength, $D \sim k$ and $a \sim -k$. The resulting dispersion relation has the same form as the hydrodynamic one of [15] with the replacement $P \rightarrow P - S^{xx}$, and this is the same for all interpolation methods. Therefore, the stability depends on the sign of $P - S^{xx}$. In a usual simulation, $P > 0$ and $S^{xx} < 0$ in a compressed region, while $P < 0$ and $S^{xx} > 0$ in a tensile region. Therefore, it is sufficient to select the appropriate interpolation method depending only on the sign of the pressure. We may expect that, in principle, even if the pressure is positive, a region can become effectively tension dominated due to a strong side-slip force, so that a criterion based on the sign of the pressure may not be sufficient. In that case, using the $\mathbf{r}_i - \mathbf{r}_j$ direction components of the deviatoric stress tensor, $S_i^{ss}$ and $S_j^{ss}$, a criterion based on the sign of $P_i - S_i^{ss} + P_j - S_j^{ss}$ may be effective. According to our experience with the test calculations, however, this criterion does not seem to be required.

Appendix B

In Appendix B, we conduct the linear stability analysis of the equations for the variable smoothing length. For simplicity, we use the equations for hydrodynamics of the Godunov SPH method, and we use $\eta = 1$. We treat the smoothing length as constant when we linearize the density, because the density distribution in the case of the variable smoothing length is almost the same as that in the case of the constant smoothing length. The positions of the particles are the same as those of Appendix A, and we again neglect second- and higher-order infinitesimal values.

We write the smoothing length of the $i$-th particle as $h_i = h + \delta h_i$. From Eq. (49), we can express $\rho_i^*$ in linearized form, and then we can express $h_i$ using Eq. (49); $m/\rho^*$ corresponds to the smoothing length in the unperturbed state, and thus $h = m/\rho^*$. From Eq. (B4), $\delta h_i$ can be expressed using $\delta x_i$. The equation of motion of the Godunov SPH method for hydrodynamics in the case of the variable smoothing length is Eq. (B6). Substituting the linearized density, pressure, and smoothing length into Eq. (B6), we obtain the dispersion relation (B7), where $\omega^2_{{\rm constant}\,h}$ is $\omega^2$ in the case of the constant smoothing length, which is written in [15]. The formula for $\omega^2_{{\rm constant}\,h}$ differs for linear interpolation, cubic spline interpolation, and quintic spline interpolation. For perturbations with any frequency lower than the Nyquist frequency, $a_s < 0$, $c_s > 0$, and $b_s$ is a positive constant that does not depend on the wavenumber. Thus, in the case of negative pressure, the variable smoothing length term makes $\omega^2$ negative and the method unstable. At the Nyquist frequency, $a_s = 0$. For perturbations with wavelengths smaller than $C_{\rm smooth}h$, $a_s$ becomes almost 0. In consequence, the extension to the variable smoothing length can make perturbations of wavelength longer than the Nyquist wavelength unstable, even if these perturbations are stable in the case of the constant smoothing length. However, if we make the value of $C_{\rm smooth}$ larger, $a_s$ becomes smaller and we can make these perturbations stable again.

According to [15], $\omega^2_{{\rm constant}\,h}$ can be decomposed into a term that becomes $C_s^2k^2$ at long wavelength and other error terms (Eq. B8). As we can see from Eqs. (B7) and (B8), only the first term of $\omega^2_{{\rm variable}\,h}$ is proportional to $C_s^2$, and all the other terms are proportional to $P/\rho$.
Thus we can evaluate whether an arbitrary state (including the spatial dimension, interpolation method, and $C_{\rm smooth}$) is stable or not using only $(P/\rho)/C_s^2$. Conversely, for an arbitrary spatial dimension, interpolation method, and value of $(P/\rho)/C_s^2$, we can evaluate the minimum $C_{\rm smooth}$ needed to achieve a stable simulation.

In [15], in the negative-pressure region, Sugiura and Inutsuka (2016) use quintic spline interpolation for one dimension, cubic spline interpolation for two dimensions, and cubic spline interpolation for three dimensions. Thus, we investigate which pairs of $(P/\rho)/C_s^2$ and $C_{\rm smooth}$ provide a stable calculation for these three cases. Figures 13, 14 and 15 show the results of this investigation for quintic spline interpolation in one dimension, cubic spline interpolation in two dimensions, and cubic spline interpolation in three dimensions, respectively. In Fig. 14, the curve extends vertically around $(|P|/\rho)/C_s^2 \sim 3.5$. This is owing to the constant smoothing length term: if $(P/\rho)/C_s^2$ is smaller than $-3.5$, the calculation becomes unstable even with a constant smoothing length. However, $(P/\rho)/C_s^2 \sim -3.5$ cannot be realized in a usual calculation. If we assume the equation of state $P = C_s^2(\rho - \rho_0)$, a density of $\rho \sim 0.22\rho_0$ is required to achieve $(P/\rho)/C_s^2 \sim -3.5$. In other words, the material would have to be stretched until the density becomes about five times smaller than the average density; in that case ordinary material should break up.

We denote the value of $C_{\rm smooth}$ on the curves of these figures by $C_{\rm smooth,crit}$. In the region of negative pressure, the calculation is stable if $C_{\rm smooth}$ is larger than $C_{\rm smooth,crit}$. For convenience, we made a fitting formula for this $C_{\rm smooth,crit}$, expressed as

$C_{\rm smooth,crit} = A \ln[B(X - C)]$,

where $X = (|P|/\rho)/C_s^2$ and the fitted coefficients $A$, $B$, and $C$ differ for each of the three cases. Here, we use the data points with $(|P|/\rho)/C_s^2 < 3.5$ for two dimensions and cubic spline interpolation. A large computational cost is required if $C_{\rm smooth}$ is large. Thus, in a practical calculation, we just make $C_{\rm smooth}$ larger locally in the negative-pressure region, while for the positive-pressure region $C_{\rm smooth} = 1.0$ is sufficient. We can calculate $C_{\rm smooth,crit}$ of the $i$-th particle using the physical quantities of that particle as $C_{{\rm smooth,crit},i} = A \ln[B(X_i - C)]$, and $C_{\rm smooth}$ of the $i$-th particle can then be calculated as

$C_{{\rm smooth},i} = C_{{\rm smooth,crit},i} + \epsilon_{\rm margin}$,

where $\epsilon_{\rm margin}$ is a small value for safety; $\epsilon_{\rm margin} = 0.1$ is sufficient. In this case, we can obtain the smoothing length of the $i$-th particle by substituting $C_{{\rm smooth},i}$ for $C_{\rm smooth}$ in Eq. (49).
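As a closing illustration, the per-particle selection of $C_{\rm smooth}$ described above can be sketched in a few lines of Python. The coefficients A, B and C below are hypothetical placeholders (the fitted values for each dimension and interpolation method are not reproduced here), and the update $C_{{\rm smooth},i} = C_{{\rm smooth,crit},i} + \epsilon_{\rm margin}$ in the negative-pressure region follows the text, with $C_{\rm smooth} = 1.0$ retained elsewhere.

```python
import numpy as np

EPS_MARGIN = 0.1  # safety margin recommended in the text

def c_smooth(P, rho, cs, A=1.0, B=1.0, C=0.0):
    """Per-particle C_smooth. A, B, C are *hypothetical* stand-ins for the
    fitted coefficients of C_smooth,crit = A * ln[B * (X - C)]; the real
    values depend on dimension and interpolation method."""
    P, rho, cs = map(np.asarray, (P, rho, cs))
    X = (np.abs(P) / rho) / cs**2
    out = np.ones_like(X)                    # C_smooth = 1.0 suffices for P >= 0
    neg = P < 0.0
    arg = B * (X[neg] - C)
    crit = A * np.log(np.maximum(arg, 1.0))  # guard the log; crit >= 0
    out[neg] = np.maximum(1.0, crit + EPS_MARGIN)
    return out

# Illustrative usage with made-up particle states:
print(c_smooth(P=[-0.5, 0.3], rho=[1.0, 1.0], cs=[1.0, 1.0]))
```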
Favouring Middle- and Upper-Class Students? The Structure and Process of Attending China's Selective Universities

Research suggests the increasing influence of family socioeconomic status, as measured by parents' income and occupations, on access to Chinese higher education. Yet, the literature remains inconclusive about the extent to which the social background of rural and urban students is associated with academic and social performance at elite universities. We address this limitation by looking at the academic and social success of representative samples of first- and second-year students enrolled at four Chinese elite universities. Our aim is to understand the characteristics that students from both urban and rural environments bring with them and how those characteristics bear on academic and social performance in university. We found an overrepresentation of students from middle- and upper-class backgrounds in both urban and rural student groups. The fact that the process indicator of cultural capital has a direct association with social success suggests students from urban areas exhibit traits valued in the selective university environment.

Introduction

The number of Chinese higher education institutions increased from 1,022 in 2008 to 2,651 in 2017, accompanied by an increase in the gross higher education enrolment rate from less than 9.76% to 45.7% (MOE, 2018). Yet, research suggests students from rural communities, which represent half of the population, are grossly underrepresented, especially in top-tier institutions (Li, 2015). Since the 1990s, researchers have seen a gradual decline in the proportion of rural students in China's most selective universities. The proportion of rural students at Peking University, for example, fell from around 20% at the end of the 1990s to around 12% in 2012 (Sun, 2017). To offset this problem, the government introduced quota initiatives to increase recruitment of students from designated poor rural communities (Xie, 2015). Yet, a question remains as to whether the policy has brought greater equity in higher education.

Decades of market reform, including a process of de-collectivization and privatization, have stratified both urban and rural society. This has exacerbated differences in wealth, power and prestige among social groups (Bian, 2002; Davis & Feng, 2009). These wealth gaps add complexity to the analysis of access and equity. Research suggests an increasing influence of family socioeconomic status, as measured by the impact of parents' occupations and income on higher education access and equity. Wang (2015), for example, notes that students from middle and upper social backgrounds, including families of professionals and government cadres, have greater access to the most selective universities. Controlling for the effect of socioeconomic status, Chen (2015) argues that rural students from villages and small towns outperformed their counterparts from county-level cities and county-level towns in gaining access to Project 211 universities. This suggests that rural parents from upper and middle strata families are more able to capitalize on their family resources and to gain an edge over their rural counterparts (Author, 2016). It also suggests that the rural-urban dichotomy in explaining admission to university has shifted more toward income and proximity to urban life. These findings suggest a mixed influence of factors amidst a newly evolving social structure (Cao, 2019).
It also provides sound theoretical reasoning for expecting an underrepresentation of rural students, from more disadvantaged backgrounds, in China's most selective universities. Yet, the literature remains inconclusive about the composition and characteristics of the rural student population in those selective colleges and universities. Studies of students in those elite institutions lack data on family structure, socioeconomic status, and the types of schools they attended before entering colleges and universities. In short, the traits of the rural students are still not well understood. Beyond that, there is a need for a systematic empirical study on how students from different types of rural backgrounds negotiate the academic and social currents in elite university environments, in comparison to their urban peers. In this paper, we address this limitation by looking at the academic and social success of representative samples of students entering four elite universities in China. We do this to understand the traits that urbanized and ruralized students bring with them into tertiary education, and how these bear on academic and social performance in university.

Chinese Context

Within less than 20 years, Chinese higher education has become a high participation system (HPS), having already reached a Gross Tertiary Enrolment Ratio (GTER) of 42.7% in 2016, and now approaching 50% in 2020 (Sun, 2017). This was the result of the interplay of several social factors. Driven by the 1998 Asian financial crisis, China launched an unprecedented expansion of higher education. This was, first, an effort to keep more young people out of the labor market until the crisis wound down and, second, a move to stimulate the economy by responding to pent-up demand from large-savings households through a cost-sharing, fee-paying model (Mok, 1999). This was encouraged by human capital-oriented policy makers (Xie, 2016). It also coincided with a knowledge economy discourse advanced by the World Bank (World Bank, 2000). By 2010, further expansion aligned with an economic restructuring move from a labour-intensive, low-investment, export-oriented system to a high-tech manufacturing and service-based economy that relies more on domestic consumption. World-class universities and mass higher education became essential for making the transition, along with the political agenda of building a strong and modern national state (Xie, 2016). A rapidly growing urban middle class sought universities for their children that would confer status culture and sustain their social mobility (Farrell et al., 2006). Credentialism grew in a competitive labour market that had to absorb upwards of eight million college and university graduates each year (Altbach and Umakoshi, 2005; Collins, 1979; Marginson, 2011).

Elitism has long been part of the Chinese higher education system, even in the socialist era. In 1954 the Chinese government named and generously funded two key universities (Tsinghua University and Harbin Institute of Technology) as a strategy of building first-class universities to train political and military leaders. Different policies followed in assigning the status of key university within the higher education system. One aim was to improve the quality of the higher education sector, and another was to build a few world-class universities. Several national excellence initiatives to identify world-class universities followed in 1993, 1998, and 2015.
The designated universities received major government investment, which enabled them to substantially outperform their competitors in the higher education system in terms of the number of faculty holding Ph.D.s and research output (Johnes & Li, 2008). These exclusive universities admit only a tiny fraction (under 2% on average) of high school graduates in China (Cao, 2019). It is unsurprising that rising inequalities (across regional, urban-rural, income, gender and ethnic lines) amid rapid economic growth have led to concerns about patterns of access to elite universities. While rapid expansion has led to increased access to higher education by rural students, structural inequality has begun to solidify (Li, 2015). For rural secondary school graduates born in the 1970s, gaining access to higher education was nearly as likely as for their urban counterparts. This began to change with the cohort born in the 1980s. Compared to their counterparts in urban areas, rural secondary school graduates now have a markedly lower chance of gaining access to selective colleges and universities (Li, 2010).

In a system still influenced by its Confucian heritage, the public believes higher education should be a great equalizer for a better society. Admission to higher education institutions is almost entirely based on academic criteria, heavily determined by one national examination commonly known as the gaokao. To ensure that rural students have an equal chance to gain a high enough score on the gaokao, reforms were initiated beginning in 2000 to improve the quality of rural schools. A new system of financing rural education (Notice on the Deepening of Reforms on Mechanisms for Guaranteeing Funding for Rural Compulsory Education) was introduced in 2005, which exempted rural students from paying various school-related and textbook fees. Cost sharing by central and provincial governments helped to improve the recruitment of qualified teachers in village and small-township schools. Six of the country's leading Normal Universities (which emphasize teacher education) initiated free tuition and special allowances for graduates who would teach in rural schools (Hallinger & Liu, 2016).

Measures were also instituted to decrease the gap between the prosperous eastern coastal regions and the less developed central and western areas of the country. A Collaborative Plan to Improve College Enrolment in Central and Western Regions was introduced in 2008, and universities in the eastern coastal cities were asked to set up quotas (35,000 in 2008 and 210,000 in 2016) for students from 14 provinces in the central and western areas. In 2012, the Chinese central government started a new policy to Revitalize Higher Education in the Central and Western Regions 2012-2020 by investing 10 billion Renminbi (RMB) in 100 higher-education institutions to help repair infrastructure, improve teaching and research capacity, and provide more places for students from the central and western areas. Moreover, a Scheme for Enrolling Students from Poor and Rural Areas was introduced to increase rural students' participation in leading universities. The number of rural freshmen in those first-tier universities was reported to have increased by 10% in 2012, 8% in 2013, and 11.4% in 2014 (MOE, 2014).
Literature Review and Theoretical Framework

Without a doubt, the higher education system has pushed back on the increased inequality by admitting more students from diverse social backgrounds, including ethnic minority students from among a minority population of 120 million, whose minority status grants them extra points added to their gaokao scores. While traditionally underrepresented groups have increased their access to higher education, top-tier admission has remained unequal (Mok, 1999). In a neoliberal discourse, individuals and families are assumed to take on a significant amount of responsibility for the cost of higher education (Ball, 2016). Reforms in higher education admissions have highlighted the family's role in preparing their children for the universities' independent recruitment process, which gives an advantage to those students who have had intensive parenting in their early years (Liu & Pensiero, 2016). As already noted, opportunities for university attendance for the rural cohorts born in the 1940s, 1950s, 1960s, and 1970s were found to be similar to those of their urban counterparts after controlling for the effects of gender and the father's occupation and education level. Yet, among the cohorts born in the 1980s and 1990s, urban students were 1.7 times more likely than their rural counterparts to gain admission to higher education (Li, 2015; Wu and Li, 2017). The fact that some studies also argue that the gap has been narrowing over the past 30 years has intensified the debate (Wang, 2015). Furthermore, even those arguing for a decreasing gap in overall access to higher education still claim that the gap between traditionally advantaged social groups, such as cadres, professionals and the new economic elite, and disadvantaged social groups persists (Wu, 2017), a trend consistent with maximally maintained inequality theory in other social and cultural contexts (Liu & Pensiero, 2016).

Furthermore, in the soon-to-be HPS, with a continuing elite sub-sector, the binary structure has become ternary, with high-value inclusion, low-value inclusion, and exclusion (Marginson, 2011). The world-class university movement tends to promote vertical stratification, with middle-class families gaining a positional competitive advantage (Marginson, 2016). Does the differentiation of value favor those "born into privilege"? Wang (2015) argues that students from higher socioeconomic backgrounds enjoy substantial advantages in going to the elite sector of HEIs. Chen (2015) examined the differentiated patterns of access by students from different socioeconomic backgrounds to different higher education sectors and argued that students from cities and those from families of higher socioeconomic status outperformed their peers in selecting and gaining access to the elite HEI sector. Other studies yield similar findings. Liu and colleagues (2015) find a similar pattern when examining the influence of fathers' party membership on access to elite universities. They point out that students from communist party members' families are overrepresented in the upper-tier (Projects 211 & 985) universities, which suggests the importance of political capital (Liu & Yao, 2015). The inevitable family-based inequalities in economic, social and cultural resources affect access to selective HEIs.
They put families with prior social advantages in a better position to translate earlier educational achievement into more successful life trajectories. Therefore, an examination of horizontal inequity in higher education should not only look at socioeconomic differences in access but also at the differentiated patterns of success in and through the higher education experience. Socioeconomically advantaged families tend to leverage their initial success into further advantages in competitive settings. Sociological studies surmise that young people from lower socioeconomic backgrounds are more likely than those of better standing to encounter barriers to full integration into an elite university environment (Granfield, 1991; Thiele et al., 2017). In some countries, they achieve less in learning and spend more time working off-campus (Jack, 2016). Students from socioeconomically disadvantaged backgrounds arrive at university with more uncertainty. They tend to feel like cultural outsiders (Ostrove & Long, 2007; Lehmann, 2014). Their dispositions, which have been shaped heavily by the social milieu from which they come, leave them with a higher potential to face confusion, conflict, and struggle in elite HEIs, a social environment unfamiliar to them (Bourdieu, 1977; 1984).

To investigate the vertical differences in success in and through higher education, this study looks at the academic and social experience of rural students at China's elite universities. The theoretical bases mentioned above make it possible to generate specific hypotheses. As rural students have less access to capital in its different forms, such as human, economic, and cultural capital, this can lead to lower levels of academic and social success in elite universities. To investigate the patterns in the academic and social experience of rural students at elite universities, the study draws from the tradition of sociological studies in which family background effects on educational inequalities are decomposed into "structural" and "process" dimensions (Coleman, 1990). While the structural dimension of family influence refers to factors related to the jobs, income, and education of the parents, the process dimension of family influence speaks to factors indicating parents' participation in, and ability to use, the resources at their disposal to improve their children's learning performance (Lareau, 2011; McNeal, 1999). Massey et al. (2011) contend that these two concepts are linked to student integration into the university environment. Breaking family influences up into structural and process dimensions has important implications. Utilizing data on the characteristics that students carry with them to universities, we may be able to trace the academic, financial, and social challenges that rural students confront back to their source. We therefore hypothesize that, without equity-enhancing interventions, these challenges put students under intense pressure, forcing them to adjust their academic aspirations and thus weakening their commitment to the university institutions they attend. This paper helps to evaluate equity by introducing a new dimension of horizontal inequality.

Research questions:

Hypothesis 1: Rural and urban students in elite universities were advanced individuals within their own groups.

Hypothesis 2: Rural students were less likely to be academically and socially successful than their urban counterparts.
Hypothesis 3: After controlling for rural-urban status, parental involvement, socioeconomic background, and senior secondary school background have limited capacity to predict academic and social success.

Methods

This study uses data from the Universities Student Experience Study (USES), a longitudinal mixed-method study (a survey with follow-up interviews) that provides a portrait of students at two intervals of their university education. The first wave of data collection was in June 2014, and the second wave was in October 2015. This paper uses quantitative data from the survey. A probability proportional to size (PPS) sampling strategy was used to select the participants for USES. We contacted the four universities for full class lists of the freshmen entering the universities in 2013. Then, we separated the population into three broad strata based on the disciplines of faculties and schools: arts and humanities, science, and engineering and medical. For each stratum, we listed classes and their population sizes. Then, using the PPS method, we calculated the cumulative sum of the population sizes and determined the number of clusters (classes) that would be sampled. All individuals in the selected classes were invited to complete the survey. The different stages were combined to ensure that the sample is representative of the four universities. In total, 1,936 students completed the first wave of the survey, a 96.8% response rate, and 1,661 students responded to the second wave, a response rate of 83.1%.

USES has two components. The first includes 47 items which measure students' social origins and retrospectively assess students' family status, past school experience, and neighbourhood environment at three important junctures (primary school, middle school, and high school). The second component contains 19 items and concentrates on assessing students' social and learning experiences, as well as their social and academic successes, during their freshman and sophomore years. The survey instrument is an adaptation and extension of a pre-established baseline survey, the Survey of College Life and Experience, used in the U.S. (Bowen and Bok, 2016). We used forward-translation and an expert panel of consultations to produce a conceptually equivalent Chinese version of this instrument. A pilot study (n = 50 students) was conducted in January 2014. The Cronbach's alpha reliability coefficient of most of the constructs in the pilot test was above .8, which suggests appropriate internal consistency of the instrument.

Figure 1 illustrates the main indices introduced in the study, including students' social origins, demographic background, parental involvement, socioeconomic background, senior secondary school background, academic success, and social success. Firstly, social origin, a binary variable (rural/urban), was measured by the administrative area of the permanent home address. A permanent home address in a municipality, provincial capital (or equivalent administrative area), city, or county was considered urban. In contrast, a permanent home address in a village (xiang) or small town (zhen) was considered rural. In the official national definition, those from county townships are considered rural. Given the rapid urbanization since 2010, with rising standards of living and average incomes across the nation, we argue that the official category rural should exclude those living in increasingly urbanized county townships.
In short, ongoing urbanization has produced a quite different county context, with a county student population that is at a greater advantage than those in villages and small towns. Secondly, the demographic background indices include ethnicity (Han or non-Han), gender, single-child family status, and household geographic location. Thirdly, parental involvement was conceptualized to indicate selected aspects of human capital cultivation, social capital cultivation, and cultural capital cultivation. Specifically, human capital cultivation is represented by the parents' decision to hire a tutor to help with their children's academic performance. Social capital cultivation is measured by their communication and interaction with their children's teachers and friends. Cultural capital cultivation refers to visits to museums, zoos, theatres and sports events, as well as leisure travel experiences. Parental involvement is a continuous measure, and the sum score of the corresponding items is calculated to represent each of the three sub-concepts.

Fourthly, student socioeconomic background was conceptualized into four sub-concepts, namely, father's occupation, father's education status, mother's education status, and family financial status. Father's occupation was further classified into four categories: manager and civil servant; professional; clerk; and working class/peasant farmer. The level of parental education is indicated by graduation status from college or university (zhuanke/benke); senior secondary school (gaozhong); junior secondary school (chuzhong); and some or all of primary school (xiaoxue). Additionally, family financial status is measured according to annual income, as well as the size and value of the house or apartment owned by the family. Senior secondary school background refers to the type of school the student attended before university, be it a key school, a state school, or some other; the general quality of the school; and academic peer support. The general quality of the school was measured on a 3-point Likert scale (where 1 indicates poor and 3 indicates excellent), and peer support of academic effort was measured on a 5-point Likert scale (where 1 indicates no support at all and 5 indicates full support). Fifthly, academic success is measured by the students' grade point average (GPA) at the university during the first year. Finally, social success was measured by the students' voluntary participation in university organizations. It has been shown that student leadership is an indicator of ability, popularity, and social engagement (Lin, Sun & Yang, 2015). By comparison, in the Chinese context, the semi-official student organizations are more exclusive.

The data were analysed with SPSS (version 24.0). Listwise deletion was used to remove missing values. Descriptive and t-test analyses are presented to examine whether there is a statistically significant difference between urban and rural students with respect to academic success. Likewise, the authors used a chi-square test to detect statistically significant differences between urban and rural students with respect to social success. Linear regression and logistic regression analyses are adopted to test whether parental involvement, socioeconomic background, and senior secondary school background predict academic success, as well as social success. For all tests, the alpha level for statistical significance was set at 0.05.
Descriptive Analyses

The proportion of rural students represented in our sample varies by institution (see Table 1). These percentages indicate that rural students are disadvantaged, because the official rural population proportion of the country is 50%. The discrepancy in the representation of urban and rural students is even greater if we rank our four sample universities by national prestige, as university [D] is ranked slightly behind the other three universities. Table 2 presents the demographic information separately for urban and rural students. Table 3 shows the students' socioeconomic background. Overall, our data suggest an underrepresentation of students, whether urban or rural, from lower-SES backgrounds. For example, for urban students, only 3.0% of their fathers were peasant farmers or in the working class. For rural students, only 20.6% of their fathers were peasant farmers or in the working class. Yet, China's 2010 National Population Census suggests that in 2010 over 70% of people were in these two categories (Lu, 2010). This suggests that children from the middle and upper classes in both urban and rural China are favoured in the university admission process.

With respect to parents' education status (both father's and mother's), the patterns are similar. Urban students have parents with higher educational attainment. For example, 59.3% of urban students' fathers but only 11.4% of rural students' fathers completed college or university degrees. Likewise, 49.2% of urban students' mothers but only 6.5% of rural students' mothers completed college or university degrees. Yet, urban and rural students' parents both have levels of educational attainment far above the national averages of their own social groups. According to the data of the sixth national census (National Bureau of Statistics of China, 2010), around 84% of urban residents have an education at or above junior secondary school (equivalent to a middle school degree). Yet, in our dataset, about 96.6% of fathers (59.3% college/university, 24.7% senior secondary, and 12.6% junior secondary) and 93.7% of mothers of urban students have an education at or above junior secondary school. At the national level, around 55% of rural residents have an education at or above junior secondary school. In our data, about 79% of fathers and 62.1% of mothers of rural students have an education at or above junior secondary school.

Regarding family financial status, both urban and rural students in elite universities are more likely to come from upper-income families in their own regions. The findings show that the average annual family income for the urban students in this study in 2013 was 162,000 RMB, which is higher (top 20% of urban incomes) than the national average annual family income in urban areas of 83,800 RMB in 2013 (Chen and Ni, 2018). For the rural students in this study, the average annual family income in 2013 was 59,100 RMB, which is higher (top 20% of rural incomes) than the national average annual family income in rural areas of 38,200 RMB (Chen & Ni, 2018). This suggests an advantage enjoyed by middle- and upper-class families in both urban and rural areas. The ownership of houses/apartments is another indicator of a family's financial status. For urban students in this study, the estimated average value of their families' houses/apartments is 1.1 million RMB. On average, rural students estimated the value of their families' houses/apartments at 0.35 million RMB.
Although the self-reported prices were not free of measurement error, the authors consider that these values were not significantly higher or lower than the average house prices on the market in 2015. Note that there was no specific information available in the national report or previous publications regarding the estimated average value of families' houses/apartments for urban and rural residents; the average price for 37 major cities in China was 8,823 RMB per square meter in 2015 (National Bureau of Statistics of China, 2016). Both 0.35 and 1.1 million RMB, set against prices across China (including major/non-major cities and non-cities), suggest that the university students in this study are from more affluent backgrounds within their own social groups (rural or urban).

A notable feature of selective university students documented by researchers is the overrepresentation of graduates from the so-called "No.1 Middle School" or "Key School", the best public key school in each province, city, and county. The National Bureau of Statistics of China (2016) indicated that in total only 2.7% of secondary schools nationwide were key schools (at the provincial, city, or county level). Wu & Huang (2017) suggested that key schools are more exclusive of students from rural and low socioeconomic backgrounds. The dataset for this study shows that 92.5% of urban and 89.5% of rural students graduated from key schools. Specifically, 55.2% of the urban and 31.3% of the rural students were from a province-level key school (see Table 4). About 24.5% of both urban and rural students were from a city-level key school. About 12.8% of urban but 33.7% of rural students were from a county-level key school. In this study, a small proportion of students (5.9% of urban, and 8.1% of rural students) were from ordinary (non-key) schools. Few (6.6% urban and 5.6% rural) students were from private (minban) schools, which are considered a less advantaged school type in China. Urban and rural students in this study highly rated the quality of their high schools, including the facilities, teaching, and school reputation, which suggests a particularly good school climate. They (urban and rural) also reported getting sufficient support from their peers in secondary school.

Table 5 documents the process dimension of parental involvement. Human capital cultivation, social capital cultivation, and cultural capital cultivation are the three indicators of parental involvement. The average human capital cultivation score for urban students is 0.69, higher than the 0.34 for rural students; larger values indicate that the parents were more likely to hire a tutor to help with their children's academic performance. The average social capital cultivation score for urban students is 2.69, higher than the 2.42 for rural students; larger values indicate that the parents know and frequently communicate and interact with their children's teachers and friends. The average cultural capital cultivation score for urban students is 2.35, higher than the 1.54 for rural students; larger values indicate that the parents were more likely to frequently take their children to visit museums, zoos, and theatres. Generally, urban parents provide more cultivation than those from rural areas.
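Before turning to the group-difference results, the t-test and chi-square procedures described in the Methods section can be illustrated compactly. The analyses themselves were run in SPSS; the sketch below is a hypothetical Python equivalent using scipy, and both the data and the column names (rural, gpa, leader) are synthetic stand-ins for the USES records.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Synthetic data frame standing in for the USES survey records
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "rural":  rng.integers(0, 2, 1572),
    "gpa":    rng.normal(3.6, 0.4, 1572),
    "leader": rng.integers(0, 2, 1572),
})

# Independent-samples t-test on GPA (academic success)
t, p = stats.ttest_ind(df.loc[df.rural == 1, "gpa"],
                       df.loc[df.rural == 0, "gpa"])
print(f"t = {t:.3f}, p = {p:.3f}")

# Chi-square test of rural/urban origin against leadership (social success)
table = pd.crosstab(df["rural"], df["leader"])
chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.3f}")
```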
Group Differences Analyses

For testing hypothesis 1, academic success is measured by students' GPA scores. Our findings show that the average GPA of urban students is 3.57, and the average GPA of rural students is 3.62. The t-test shows no statistically significant difference between rural and urban students in academic success (t = 1.263, df = 1570, p = .207). Social success is measured as a binary categorical variable indicating whether or not a student is a leader of a student organization. There are similar proportions of urban student leaders (53%, n = 592) and rural student leaders (49%, n = 236) participating in activities of student organizations. The chi-square test shows no statistically significant difference between rural and urban students in being a leader in student organizations (χ² = 2.632, df = 1, p = .105).

Considering the different nature of student bodies in terms of their importance for social and cultural capital accumulation, as well as their criteria for admission, we further classified student organizations into two types, namely, official and non-official. Official student organizations are those connected with the university administration or party apparatus, for example, the student union and the Youth League Committee; they have higher criteria for recruitment, are important for profile building, and predict employment upon graduation. In this study, 36% of urban students (n = 396) and 23% of rural students (n = 110) are leaders in official student organizations. The chi-square test shows a statistically significant difference between rural and urban students in being a leader in an official student organization (χ² = 28.55, df = 1, p < .001). The non-official student organizations include less formal and self-regulated organizations such as football clubs and comic and animation clubs. Recruitment into non-official student organizations is based more on personal interests and is less important for profile building. In our sample, 241 urban students (21.9%) and 124 rural students (25.8%) were leaders in non-official student organizations. The chi-square test shows no statistically significant difference between rural and urban students in being a leader in non-official student organizations (χ² = 3.252, df = 1, p = .071). It is worth noting that some students were leaders in both official and non-official student organizations. Therefore, the number of leaders in official (i.e., 396 urban students) plus non-official (i.e., 241 urban students) student organizations was larger than the number of leaders in any student organization (i.e., 592 for urban students).

Regression Analyses

This study used multiple linear regression to examine whether parental involvement, socioeconomic background, and senior secondary school background predict academic success. In order to mitigate post-treatment bias (Montgomery et al., 2018), several covariates, especially demographic background variables, were included in the analysis. The key/non-key school indicator was removed due to its low variance. Geographical region of family location and mother's education were removed due to violation of the collinearity assumption of regression. The influence of the different variables is examined by adding them to the model one block at a time, and the model satisfied all regression assumptions. In general, 10.3% of the variance in students' GPAs is explained by rural/urban origin, demographic background, parental involvement, SES, and senior secondary school background (see Table 6).
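For readers who prefer code to prose, the nested linear model just summarized could be specified as follows. This is an illustrative sketch only (the authors used SPSS), with synthetic data and hypothetical variable names standing in for the USES measures; predictor blocks are entered one at a time, as described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the USES records; column names are hypothetical
rng = np.random.default_rng(1)
n = 1572
df = pd.DataFrame({
    "gpa":          rng.normal(3.6, 0.4, n),
    "rural":        rng.integers(0, 2, n),
    "gender":       rng.integers(0, 2, n),
    "ethnicity":    rng.integers(0, 2, n),
    "cultural_cap": rng.normal(2.0, 0.8, n),
    "father_edu":   rng.integers(0, 4, n),
})

# Blocks of predictors entered one at a time, as described in the text
formulas = [
    "gpa ~ C(rural)",
    "gpa ~ C(rural) + C(gender) + C(ethnicity)",
    "gpa ~ C(rural) + C(gender) + C(ethnicity) + cultural_cap + C(father_edu)",
]
for f in formulas:
    fit = smf.ols(f, data=df).fit()
    print(f"{f!r}: R^2 = {fit.rsquared:.3f}")
```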
Ethnicity, gender, parental involvement, socioeconomic background, and senior secondary school background are statistically significant predictors of university GPA, after controlling for the other variables. Specifically, Han Chinese (ethnic majority) students have higher GPAs than ethnic minority students (b = .158, p = .027). Male students have lower GPAs than female students (b = -.344, p < .001). Regarding parental involvement, all three indicators, human capital cultivation (b = -.054, p = .038), social capital cultivation (b = -.058, p = .010), and cultural capital cultivation (b = .069, p = .018), predict academic success. In other words, if students received more human or social capital cultivation, they had lower GPAs. The results suggest a distinctive pattern of school involvement by Chinese parents: parents may choose to intervene in their children's learning process when the children experience learning difficulties. The results also suggest that if students received more cultural capital cultivation, they had higher GPAs.

Regarding SES, father's occupation and father's education predict academic success. Among students whose fathers were managers and civil servants, professionals, or clerks, there were no statistically significant differences. Students whose fathers were workers or peasant farmers had relatively lower GPAs (b = -.180, p = .042) than students whose fathers were managers and civil servants. Students whose fathers graduated from primary school (b = -.196, p = .017) or senior secondary school (b = -.121, p = .013) had lower GPAs than students whose fathers graduated from college. Students whose fathers graduated from junior secondary school had similar GPAs (b = -.068, p = .251) to students whose fathers graduated from college. In terms of senior secondary school background, the GPAs of students who attended province-level, city-level, or county-level key schools are very much alike. However, students who attended ordinary schools had lower GPAs than students who attended province-level key schools. The overall quality of the school (b = .020, p = .532) and peer support for academic effort (b = .029, p = .388) were not statistically significant predictors of academic success.

In this study, we used two logistic regression analyses to examine whether parental involvement, socioeconomic background, and senior secondary school background are predictors of social success, while controlling for demographic background. In the first logistic regression model (Model 1 in Table 7), the outcome variable is whether or not a student is a leader in either an official or a non-official student organization (1 = being a leader, 0 = not being a leader). The results show that all variables (ethnicity, gender, single-child status, rural or urban origin, parental involvement, socioeconomic background, and senior secondary school background) together explain 3.3% of the variance in being a student leader. In this model, ethnicity and parental involvement are statistically significant predictors, after controlling for the other variables. Specifically, Han Chinese majority students have a greater chance of being a leader in student organizations than ethnic minority students (Exp(B) = .532, Wald(1) = 2.418, p = .016). Regarding parental involvement, only social capital cultivation is a predictor of student leadership (Exp(B) = .159, Wald(1) = 2.304, p = .022).
In other words, students who received more social capital cultivation were more likely to obtain leadership roles in student organizations. In the second logistic regression model (Model 2 in Table 7), the outcome variable is whether a student is a leader in an official student organization (1 = being a leader, 0 = not being a leader). The results show that all variables together explain 8.5% of the variance in being a leader in an official student organization. After controlling for the other variables, ethnicity, gender, parental involvement, and socioeconomic background are predictors of being a student leader in official student organizations. In particular, Han Chinese majority students have a greater chance of holding leadership roles in official student organizations than ethnic minority students (Exp(B) = .773, Wald(1) = 2.386, p = .017). Male students have a lower chance of becoming a leader than female students (Exp(B) = -.333, Wald(1) = 2.431, p = .015). Regarding parental involvement, only cultural capital cultivation is a predictor of student leadership in official student organizations (Exp(B) = .321, Wald(1) = 3.178, p = .001). In other words, students who received more cultural capital cultivation were more likely to be leaders in official student organizations.

Concerning socioeconomic background, all three indicators, father's occupation, father's education, and family financial status, predict being a leader in official student organizations. For students whose fathers were managers, civil servants, or clerks, there is no statistically significant difference in the chance of being a leader in official student organizations (Exp(B) = .053, Wald(1) = 2.789, p = .779). For students whose fathers were professionals (Exp(B) = .464, Wald(1) = 2.320, p = .021) or working class and peasant farmers (Exp(B) = .849, p = .031), the chance of being a leader in official student organizations was lower than for students whose fathers were managers and civil servants. Students whose fathers graduated from senior secondary school (Exp(B) = -1.013, Wald(1) = 2.666, p = .008) were less likely to be leaders in official student organizations than students whose fathers graduated from college. Nevertheless, there is no statistically significant difference in being a leader in official student organizations among students whose fathers graduated from college, junior secondary school (Exp(B) = -.101, Wald(1) = 0.419, p = .638), or primary school (Exp(B) = -.128, Wald(1) = 0.739, p = .459).

Conclusion

While policies to prepare rural students for the national college entrance examination and other preferential policies for rural and ethnic minority students have been helpful in gaining access to selective universities, the debate continues about the extent of access and equity in China's HPS (Liu, 2012; Mok, 1999). As Chinese society becomes increasingly stratified, coupled with a tradition of elitism, competition for social mobility and for admission to top-tier universities has intensified. This provides theoretical reasoning for expecting an underrepresentation of rural students from more disadvantaged backgrounds gaining admission and succeeding in top-tier universities. It is also reasonable to expect that their integration into an elite university environment may be more problematic.
The USES survey of the freshmen cohort of four elite universities in the autumn of 2013 casts light on who benefits most from the recent policies aimed at enhancing access and equity in higher education. Our analysis provides an empirical foundation for discussion. The structural dimensions of family influence confirm a disadvantaged position for rural students in access to the most selective universities in China. They also confirm the advantaged position that students from upper- and middle-class backgrounds in rural areas have in access to selective universities. The data emphasize the role of parenting among the expanding Chinese middle class, a parenting style increasingly characterized by intensive investment in cultural capital. For example, rural students are less likely than their urban counterparts to succeed socially, as measured by their memberships of official student bodies, an outcome predicted by parental involvement in the creation of cultural capital.

Contrary to studies in Western industrialized countries, our data suggest that rural students perform academically as well as their counterparts from urban areas (Gao, Liu & Fang, 2015; Li, Hou & Wen, 2015). Two reasons explain this. First, the national college and university entrance examination (gaokao) is the single, and often only, determinant of admission to a top-tier university. To a great extent, it excludes other admission criteria such as high school achievement, other talents and hobbies, family donations, and legacy admission. Second, an increasing number of the more well-off rural parents have the resources and parenting perspectives to position their children advantageously for admission to selective universities. Once on campus, rural students earn grades comparable with their urban counterparts (Author, 2018).

This paper highlights the analysis and importance of the process dimension of family influence in access and equity in China's elite universities. Early parental involvement translates into cultural capital for both academic and social success in top-tier universities. This is especially central to understanding the structuring feature of the post-socialist state. As a new structure arises, the ways privileged social groups pass on their advantages to their children are becoming more sophisticated. Upper- and middle-class parents' practices in cultural capital investment are the result of intensive status competition among households affected by a neo-liberal discourse on individual responsibility for one's own success (Ball, 2003). Although there is no clear pattern of a dominant culture, many anxious upper- and middle-class urban parents tend to invest in anything that they think might bring privileges to their children. And their cultural capital strategies are successfully translated into their children's social success in elite universities (Marginson, 2016). Therefore, the concept of cultural capital has the potential to demystify the educational privileges of the urban middle class. It helps reveal the traits of students from upper- and middle-class families, both urban and rural, that are valued by elite universities. As Marginson (2016) argues, a double competition plays out in the field of higher education: between institutions, which compete for status, and between students and their families, who compete to obtain credentials to achieve social positions.
Chinese higher education is also becoming increasingly stratified; an examination of the vertical differences in success in access to, and within, higher education adds a dimension to evaluating policy initiatives to raise the number of rural students from underserved communities in Chinese top-tier universities. Broadening rural student participation also remains a policy aim. Yet policies generally fail to consider family influences and the patterns of success in admission to, and within, elite universities that are differentiated by socioeconomic background. Policies to improve access and equity in higher education should be recalibrated for students from underserved rural communities. Further, the stratification of higher education is both supply side- and demand side-driven, and it leads to fiercer competition among both universities and families. Students better endowed with cultural, economic, and social capital are inevitably in an advantaged position, while their counterparts self-exclude or underinvest (Chesters & Watson, 2013; Zha, 2017). More effective reforms should involve the supply side of higher education and reverse the trend towards further polarization of higher education institutions in terms of prestige, resource allocation, and geographical location. Moreover, this study applied regression analyses as an explanatory model; future research can examine how latent variable models (e.g., structural equation modeling) can be incorporated to represent concepts such as parental involvement and students' socioeconomic background and to evaluate their relationships with academic and social success.
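For readers who wish to rework the kind of models reported above, the following is a minimal sketch of a Table 7-style logistic regression, written in Python rather than SPSS; the column names and synthetic data are hypothetical stand-ins, not the USES variables:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "leader": rng.integers(0, 2, n),           # 1 = leader, 0 = not a leader
    "han": rng.integers(0, 2, n),              # 1 = Han Chinese majority
    "male": rng.integers(0, 2, n),
    "cultural_capital": rng.normal(0, 1, n),   # parental involvement scales
    "social_capital": rng.normal(0, 1, n),
    "father_edu": rng.integers(1, 5, n),       # 1 = primary ... 4 = college
})

# Logistic regression; C(father_edu) enters as dummy-coded categories,
# mirroring the group contrasts against the college-graduate reference.
model = smf.logit(
    "leader ~ han + male + cultural_capital + social_capital + C(father_edu)",
    data=df,
).fit(disp=False)

print(model.summary())
print(np.exp(model.params))  # odds ratios, analogous to Exp(B) in SPSS output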
Evaluation of the antibody response and adverse reactions of the BNT162b2 vaccine of participants with prior COVID-19 infection in Japan

Objective: Vaccination programs are important to preventing COVID-19 infection. BNT162b2 is a new type of vaccine, and previous studies have shown that the antibody response was significantly elevated in patients with prior COVID-19 infection after the first dose of BNT162b2 vaccination. However, no study has evaluated the efficacy of the vaccination or the adverse reactions of people with prior COVID-19 infection in Japan. The aim of this study is to evaluate the antibody titer and adverse reactions of the BNT162b2 vaccine among participants with prior COVID-19 infection in Japan.

Methods: The data for this prospective study were collected between April 15, 2021, and June 9, 2021. All of the hospital staff who received the BNT162b2 vaccine were included in this study and were sorted into either the prior infection group or the control group. We collected the data on adverse reactions through self-reporting and measured the anti-SARS-CoV-2 spike-specific antibody titer for all participants.

Results: The antibody titer of the prior-infection group in the first antibody test was significantly higher than that of the control group in the second antibody test. There was no significant difference in adverse reactions between the prior infection group receiving its first vaccination and the control group receiving its second vaccination. Furthermore, a history of prior infection was not related to local and systemic adverse reactions in the multivariate logistic regression analysis.

Conclusion: Our study shows that the antibody response following the first vaccination in the prior COVID-19 infection group was comparable to that following the second vaccination in the control group; however, the evaluation of adverse reactions was inadequate, and further large-scale studies are needed.

Introduction

[...] Multivariate logistic regression analysis was performed for local and systemic adverse reactions. Odds ratios and corresponding 95% confidence intervals were calculated. A p-value of less than 0.05 was considered to indicate statistical significance. Data were analyzed with the Statistical Package for the Social Sciences, version 26.0 (SPSS, Chicago, IL, USA).

Results

Overall, 501 participants met the inclusion criteria. However, 96 participants were excluded because they did not provide their consent to the study, one participant was excluded because of a new COVID-19 infection after vaccination, and twenty-one participants were excluded because they did not provide a blood sample within the deadline. Finally, 383 (76.4%) participants, divided into a nine-person prior COVID-19 infection group and a 374-person control group, were analyzed (Fig 1).

The antibody titer of the second antibody test in the prior COVID-19 infection group was missing for one case, and the data on adverse reactions were missing for two cases in the prior COVID-19 infection group and for fifteen cases in the control group.

The baseline characteristics of the prior COVID-19 infection group and control group are shown in Table 1. The median age (interquartile range) of the participants was younger in the prior COVID-19 infection group than in the control group (26 (16.0) vs 36 (16.0); p = 0.12), but there were no other significant differences. The proportion of males was slightly smaller in the prior COVID-19 infection group, but the difference was not significant (p = 0.46).
Moreover, there was no obesity, and only a few participants had a previous medical history in the prior COVID-19 infection group.

The comparison of the anti-SARS-CoV-2 spike-specific antibody response between the prior COVID-19 infection group and the control group is shown in Fig 2. The log anti-SARS-CoV-2 spike-specific antibody titers of the prior COVID-19 infection group were higher than those of the control group in the pre-vaccination, first antibody, and second antibody tests (p < 0.001). The log antibody titer of the prior COVID-19 infection group in the first antibody test was significantly higher than that of the control group in the second antibody test (p < 0.001), whereas there was no [...] was larger than that of the first vaccination in the control group (p < 0.001).

The proportion of adverse reactions for each day after vaccination is shown in Fig 3. Injection site symptoms and headaches were confirmed in both groups four days after the first and second vaccinations. Fever was confirmed in the early days after the second vaccination in both groups. Fatigue was confirmed after four days of the first and second vaccination doses in the control group, whereas it was confirmed in the early days after both doses in the prior COVID-19 infection group. Myalgia and arthralgia were confirmed after four days of the first and second vaccination in the control group, whereas they were confirmed only within three days after receiving the second dose of the vaccination in the prior COVID-19 infection group.

The total days of adverse reactions are recorded in Fig 4. The length of the days of systemic symptoms was approximately half the length of the local symptoms, and the mean number of days was shorter than 1.5 days. The total days of fever and fatigue after the second vaccination were significantly longer than after the first vaccination in both groups (p < 0.05, p < 0.001). In the control group, the total number of days of myalgia and arthralgia was significantly longer after the second dose of the vaccination than after the first vaccination (p < 0.001), whereas the total number of days of myalgia and arthralgia was longer after the first vaccination than after the second vaccination in the prior COVID-19 infection group. The results of the multivariate logistic regression analysis of adverse reactions are shown in Table 3.
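As an aside for readers reworking these comparisons, the following is a minimal sketch (not the authors' SPSS workflow) of the key between-group contrast above: log-transformed titers in the prior-infection group after dose 1 versus the control group after dose 2. The data are entirely synthetic, and the Mann-Whitney U test is our assumption; the paper does not name the test used for this specific comparison:

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
prior_dose1 = rng.lognormal(mean=8.0, sigma=0.5, size=9)      # n = 9
control_dose2 = rng.lognormal(mean=7.0, sigma=0.5, size=374)  # n = 374

# Compare log10 titers between the two independent groups.
stat, p = mannwhitneyu(np.log10(prior_dose1), np.log10(control_dose2),
                       alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4g}")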
Discussion

In our study, the antibody titer dramatically increased in the prior COVID-19 infection group after the first vaccination among Japanese people. This is the first study to evaluate the antibody response after administration of the BNT162b2 vaccine in a prior COVID-19 infection group in Japan.

According to previous studies, the median antibody titers after the first dose of vaccination were significantly higher in participants who had been previously infected with the COVID-19 virus than in uninfected participants [5,6]. A study carried out by Ebinger et al. showed that the antibody titer reached a plateau after the first dose of the vaccination and that there was no significant difference between antibody titers after the first dose and after the second dose of the vaccination [6]. In our study, the antibody titer of the prior COVID-19 infection group reached a plateau after receiving the first dose of the vaccination, and it was significantly higher than that of the control group after receiving the second dose of the vaccination, as demonstrated in previous studies. In contrast to the results of previous studies, in our study, the antibody titer of the prior COVID-19 infection group after receiving the first dose of the vaccination was significantly higher than that of the control group after receiving the second dose of the vaccination.

Based on the results of previous studies, age and gender can be considered factors that increase the antibody titer. A study carried out by Gustafson et al. showed that the immune response to vaccination is controlled by a delicate balance of effector T cells and follicular T cells, and aging disturbs this balance. Multiple changes in T cells have been identified as contributing to age-related defects of post-transcriptional regulation, metabolic function, and T-cell receptor signaling [8]. In this study, there was no significant difference in age between the prior COVID-19 infection group and the control group, but the median age was lower in the prior COVID-19 infection group, which may have increased the antibody titer.

Differences in sex hormones are associated with gender differences in vaccine-induced immunity. For example, testosterone levels and the antibody titer of the influenza vaccine have been shown to be inversely correlated [9-11]. Genetic differences, as well as sex hormone differences, affect vaccine-induced immunity. The X-chromosome expresses ten times more genes than the Y-chromosome, and the differences in gene expression between the X- and Y-chromosomes promote gender differences in vaccine-induced immunity [12]. In contrast to this widely held belief, in our previous study, gender was not a significant factor in the differences in antibody titers, and it is possible that gender differences did not contribute much to the increases in antibody titers in this study as well [7].

The antibody titer level does not necessarily reflect the immune function against the BNT162b2 vaccine as a whole. However, our study showed that prior COVID-19 infection may increase the immune response after receiving the first dose of the vaccination, and this fact is especially important for vaccine delivery systems in Japan.

There are very few studies that evaluate the adverse reactions of the BNT162b2 vaccine. The proportion of adverse reactions to the first and second doses of the vaccination in the control group
was similar to those in previous studies [4,13]. Few studies have shown the adverse reactions to the first and second doses of the vaccination with and without prior COVID-19 infection [14,15], and they described the number of days and cases after vaccination, but the ranges of days were two days or fewer, two to seven days, and seven days or more. In contrast, the number of adverse reactions was recorded on a daily basis up to the seventh day after vaccination in our study. The trend of fever and fatigue in our study was similar to that of the previous study, whereas injection-site symptoms after the first and second vaccinations in both groups, headache in the prior COVID-19 infection group after both doses, and myalgia and arthralgia after the first vaccination in the prior COVID-19 infection group lasted longer than those of the previous study [6].
The higher dimensional gravastars

A new model of gravastar is obtained in $D$-dimensional Einstein gravity. This class of solutions includes the gravastar as an alternative to $D$-dimensional versions of the Schwarzschild-Tangherlini black hole. The configuration of this new gravastar consists of three different regions with different equations of state: [I] Interior: $0 \leq r < r_1$, $\rho = -p$; [II] Shell: $r_1 \leq r < r_2$, $\rho = p$; [III] Exterior: $r_2 < r$, $\rho = p = 0$. The outer region of this gravastar corresponds to a higher dimensional Schwarzschild-Tangherlini black hole.

There have been efforts in constructing a non-singular gravastar as an alternative to (2+1)-dimensional black holes. These efforts were inspired by an earlier attempt by us [3], wherein we had constructed a compact astrophysical charged object, a (3+1)-dimensional charged gravastar, as an alternative to charged black holes. However, the present solution for the proposed astrophysical object of higher dimensional gravastars is found to be singular at its origin, which is a point of concern. Thus, there is a pertinent need to understand the subject from the very basics of the cleaner (2+1)-dimensional gravity and to develop the subject of these gravitational vacuum stars starting from (2+1) dimensions to higher dimensions.

In connection with de Sitter spacetime and black holes, a series of works is available in the literature [4-11]. These investigations are interesting in the sense that the authors have analyzed the globally regular solution of the Einstein equations describing a black hole whose singularity is replaced by a de Sitter core. However, in our present study we extend the proposition of the charge-free gravastars of Mazur and Mottola [12,13] to a charged compact object. While doing so, we invoke the idea of electromagnetic mass (EMM), which suggests that the interior de Sitter vacuum of a charged gravastar generates the gravitational mass [14-17]. This provides a stable configuration by balancing the repulsive pressure arising from the charge against gravity, thereby averting a singularity.

It is commonly believed that the present 4-dimensional spacetime structure is a self-compactified form of a higher-dimensional manifold. Accordingly, cosmic string and superstring theories, and hence M-theory, which reproduce higher dimensional general relativity at low energy, suggest that theories of unification tend to require extra spatial dimensions to be consistent with physically viable models [18-23]. The classical analogue of effective String Theory is the low-energy effective action containing squares and higher powers of curvature terms. Similar higher-derivative gravitational terms also appear in the renormalization of quantum field theory in a curved space background. Further, it has been shown that some features of higher dimensional black holes differ significantly from those of four dimensional black holes, as higher dimensions allow for a much richer landscape of black hole solutions that do not have 4-dimensional counterparts [24]. The subject draws further interest due to (1) the conceivable possibility of the production of higher dimensional black holes in future colliders in the scenario of large extra dimensions and TeV-scale gravity [25,26], and (2) the AdS/CFT correspondence, which relates the properties of a $D$-dimensional black hole to those of a quantum field theory in $(D-1)$ dimensions [27]. In fact, the study of higher dimensional black holes has gained momentum in the first decade of this millennium.
As in the present paper we consider the gravastar as an alternative to black holes, it is reasonable to study the higher dimensional gravastar, given the importance of higher dimensional black holes. We therefore present our study of higher dimensional gravastars proposed as an alternative to the higher dimensional Schwarzschild-Tangherlini black holes [28]. We develop the mathematical framework for these gravastars and obtain solutions for their three separate regions: the interior, the shell, and the exterior. We then study the proper length and energy, the entropy, and the junction conditions in detail. The results and discussions are presented in every section under various headings and subheadings. At the end, we conclude our findings.

Interior space-time

Since we are exploring the higher dimensional gravastar, we assume a $D$-dimensional spacetime with the structure $R^1 \times S^1 \times S^d$ ($d = D - 2$), where $S^1$ is the range of the radial coordinate $r$ and $R^1$ is the time axis. For this purpose, let us consider a static spherically symmetric metric in $D = d + 2$ dimensions as

  $ds^2 = -e^{\nu(r)} dt^2 + e^{\lambda(r)} dr^2 + r^2 d\Omega_d^2$.   (1)

The notation $d\Omega_d^2$ is the line element on a $d$-dimensional unit sphere, parametrized by the angles $\phi_1, \phi_2, \ldots, \phi_d$:

  $d\Omega_d^2 = d\phi_1^2 + \sin^2\phi_1 \, d\phi_2^2 + \sin^2\phi_1 \sin^2\phi_2 \, d\phi_3^2 + \cdots$.   (2)

The Hilbert action coupled to matter is given by

  $S = \int d^D x \sqrt{-g} \left( \frac{R_D}{16\pi G_D} + L_m \right)$,   (3)

where $R_D$ is the curvature scalar in $D$-dimensional spacetime, $G_D$ denotes the $D$-dimensional Newton constant, and $L_m$ is the Lagrangian for the matter distribution. Varying the above action with respect to the metric yields the Einstein equations $G^D_{ab} = 8\pi G_D T_{ab}$, where $G^D_{ab}$ denotes the Einstein tensor in $D$-dimensional spacetime. The interior of the star is assumed to be of perfect fluid type,

  $T_{ab} = (\rho + p) u_a u_b + p \, g_{ab}$,   (4)

where $\rho$ represents the energy density, $p$ is the isotropic pressure, and $u^i$ is the $D$-velocity of the fluid. The Einstein field equations for the metric (1), together with the energy-momentum tensor given in Eq. (4), take the form of Eqs. (5)-(7), where a '′' denotes differentiation with respect to the radial parameter $r$. Here we have assumed $c = 1$ in geometrized units. The conservation equation in $D$-dimensional spacetime follows as Eq. (8).

Following Mazur-Mottola [12], we assume the Equation of State (EOS) for the interior region in the form

  $p = -\rho$.   (9)

Using this EOS, one gets from Eq. (8) that the energy density is a constant. We write this constant as $\rho = \rho_c$. This means that in the interior we are essentially considering the Cosmological Constant, i.e., the vacuum energy density of Einstein [29,30]. Therefore, the pressure may be expressed as $p = -\rho_c$. Using Eq. (9), one gets the solution for $\lambda$ from the field Eq. (5), in which $E$ is an integration constant. Since $d > 2$ and the solution is regular at $r = 0$, we demand $E = 0$. Using Eq. (9), one may also obtain from Eqs. (5) and (6) the relation $\nu = -\lambda + \ln C$, where $\ln C$ is an integration constant. Thus the interior solutions follow.

We then calculate the active gravitational mass $M(r)$ in higher dimensions, which is found to be

  $M(R) = \int_0^R \Omega_d \, r^d \rho_c \, dr = \frac{\Omega_d \, \rho_c \, R^{d+1}}{d+1}$.

This is the usual gravitating mass for a $d$-dimensional sphere of radius $R$ and energy density $\rho_c$. The space-time metric thus obtained turns out to be free from any central singularity.

Exterior space-time

The exterior region, defined by $p = \rho = 0$ in higher dimensions, is nothing but a generalization of the Schwarzschild solution, which, as obtained by Tangherlini [28], reads as

  $ds^2 = -\left(1 - \frac{\mu}{r^{d-1}}\right) dt^2 + \left(1 - \frac{\mu}{r^{d-1}}\right)^{-1} dr^2 + r^2 d\Omega_d^2$.

Here $\mu = 16\pi G_D M/\Omega_d$ is the constant of integration, with $M$ the mass of the black hole and $\Omega_d$ the area of a unit $d$-sphere,

  $\Omega_d = \frac{2\pi^{(d+1)/2}}{\Gamma\left(\frac{d+1}{2}\right)}$.

Shell

It is assumed that the thin shell contains an ultra-relativistic fluid of soft quanta obeying the EOS $p = \rho$. This represents a stiff fluid model of the Zel'dovich type, in connection with the cold baryonic universe [30].
It is difficult to obtain a general solution of the field equations in the non-vacuum region, i.e., within the shell. We therefore try to find an analytic solution within the thin shell limit, $0 < e^{-\lambda} \equiv h \ll 1$. As an advantage of this limit, we may set $h$ to zero to leading order. Under this approximation, the field Eqs. (5)-(7), with the above EOS, may be recast in a simplified form. Integration of Eq. (18) then immediately yields the solution for $h$, in which $E$ is an integration constant. The range of $r$ lies within the thickness of the shell, $[r_1 = R, r_2 = R + \epsilon]$. Under the condition $\epsilon \ll 1$, we get $E \ll 1$, since $h \ll 1$. The other metric coefficient, $\nu$, can be found similarly, with $r_0$ an integration constant. Also, from the conservation equation and using the same EOS as above, one may obtain $\rho = \rho_0 r^{2d}$, with $\rho_0$ an integration constant. As $\rho \propto r^{2d}$, the ultra-relativistic matter in the shell ($r_1 \leq r < r_2$) is denser at the outer boundary ($r = r_2$) than at the inner boundary ($r = r_1$).

Fig. 1. The variation of pressure and density of the ultra-relativistic matter in the shell against r for different dimensions.

Proper length and Energy

We consider the matter shell to be situated at the surface $r = R$, describing the phase boundary of region I. The thickness of the shell ($\epsilon \ll 1$) is assumed to be very small, so that region III joins at the surface $r = R + \epsilon$. We now calculate the proper thickness between the two interfaces, i.e., of the shell, as

  $\ell = \int_R^{R+\epsilon} \sqrt{e^{\lambda}} \, dr$.

Solving this integral, one gets the proper length, where $a = \frac{1}{2(d-1)}$. It is also interesting to calculate the energy $E$ within the shell; up to first order in $\epsilon$, this energy is proportional to $R\epsilon$ (Eq. (26)).

Fig. 2. The variation of proper length within the shell against r for different dimensions.

We observe that the energy within the shell is not only proportional to $\epsilon$ at first order in the thickness but also depends on the dimension $d$ of the spacetime.

Entropy

We calculate the entropy following the Mazur and Mottola prescription [12]. Here, $s(r)$ stands for the entropy density at the local temperature $T(r)$, which may be written in terms of a dimensionless constant $\alpha^2$. Thus the entropy of the fluid within the shell can be found by integrating $s(r)$ over the shell.

Fig. 3. The variation of entropy within the shell against r for different dimensions.

Junction Condition

The gravastar configuration contains three regions, in which the interior region I is connected with the exterior region III at the junction interface, i.e., at the shell. This makes a geodesically complete manifold with a matter shell at the surface $r = R$; thus, a single manifold characterizes the gravastar configuration. According to the fundamental junction condition, there has to be a smooth matching between regions I and III of the gravastar. However, though the metric coefficients are continuous at the junction surface $S$, their derivatives may not be continuous there. Thus the affine connections may be discontinuous at the boundary surface; in other words, the second fundamental forms [31-37] associated with the two sides of the shell are discontinuous. Here $n^{\pm}_{\nu}$ are the unit normals to $S$ and can be written as

  $n^{\pm}_{\nu} = \pm \left| g^{\alpha\beta} \frac{\partial f}{\partial x^{\alpha}} \frac{\partial f}{\partial x^{\beta}} \right|^{-1/2} \frac{\partial f}{\partial x^{\nu}}$,   (29)

where, in Eq. (29), $\xi^i$ are the intrinsic coordinates on the shell and $f(x^{\alpha}(\xi^i)) = 0$ is the parametric equation of the shell $S$. Here, $-$ and $+$ denote the interior and exterior regions, respectively. This discontinuity of the second fundamental forms produces an intrinsic stress-energy tensor within the shell.
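For reference, the Lanczos relations invoked below take the following standard form; this is a sketch for orientation only, since sign conventions and the $8\pi$ factor (with or without $G_D$) vary across the cited sources:

  % Jump in the extrinsic curvature across the shell S
  \kappa_{ij} = K^{+}_{ij} - K^{-}_{ij}

  % Lanczos equation: surface stress-energy determined by that jump
  S^{i}_{\;j} = -\frac{1}{8\pi}\left(\kappa^{i}_{\;j} - \delta^{i}_{\;j}\,\kappa^{k}_{\;k}\right)

For a static spherically symmetric shell, the time-time component of $S^{i}_{\;j}$ gives the surface energy density, and the angular components give the surface pressure (minus the surface tension).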
Using the Lanczos equations [38,39,40,31,41,42], one can write the surface intrinsic energy-momentum tensor, whose components give the surface energy density and the surface tension. For our gravastar configuration, we calculate these quantities explicitly (Eqs. (36) and (37)). We see that the energy density as well as the surface tension of the junction shell are negative. This means we have a thin shell of matter content with negative energy density. It is to be noted that the discontinuity of the affine connections at region II, i.e., in the shell, provides the above matter confined within the shell. Such a stress-energy tensor is not ruled out by considerations of the Casimir effect between compact objects at arbitrary separations [43]. The negative surface tension also indicates that there is a surface pressure, as opposed to a surface tension. Thus, in principle, the shell of our gravastar configuration consists of a combination of two types of matter distributions, namely, the ultra-relativistic fluid obeying $p = \rho$ and the matter components due to the discontinuity of the second fundamental form at the junction interface, given in Eqs. (36) and (37). We demand that these two fluids are non-interacting; together they characterize the shell of the gravastar, i.e., the non-vacuum region II.

Concluding remarks

In the present work we generalize the concept of the gravastar, a gravitational vacuum star, from 4-dimensional spacetime to $D$-dimensional Einstein gravity of the Schwarzschild-Tangherlini category of black hole. To do so, firstly, we have considered three different regions with different EOS: [I] $0 \leq r < r_1$, $\rho = -p$ (Interior); [II] $r_1 \leq r < r_2$, $\rho = p$ (Shell); and [III] $r_2 < r$, $\rho = p = 0$ (Exterior). Secondly, the conjecture of electromagnetic mass (EMM) has been invoked due to the presence of charge. Originally, Lorentz [14] proposed a model for the extended electron and conjectured that "there is no other, no 'true' or 'material' mass," which thus provides only an 'electromagnetic mass of the electron'. Wheeler [15] and Wilczek [17] also argued that the electron has a "mass without mass". Feynman, Leighton and Sands [16] termed this type of model an "electromagnetic mass model". The idea of EMM, in which all the physical parameters, including the gravitational mass, arise from the electromagnetic field alone, has been extensively studied by several investigators [44-52] under the general relativistic framework, where the spacetime geometry is assumed to be associated with the presence of a charged particle obeying Maxwell's equations of electromagnetic theory. However, in connection with the interior configuration I of EMM, we would like to record that most of the above investigators exploit an EOS with a repulsive pressure of the form $p = -\rho$, which is a very common feature in the context of the present accelerating Universe and has been argued to be connected with $\Lambda$-dark energy [53-56]. An EOS of this type implies that the matter distribution under consideration is in tension, and hence the matter is known in the literature as a 'false vacuum', 'degenerate vacuum', or '$\rho$-vacuum' [57-60]. This EOS was first discussed by Gliner [61] in his study of the algebraic properties of the energy-momentum tensor of ordinary matter through the metric tensors. Later on, it was revealed that the gravitational effects of the zero-point energies of particles and electromagnetic fields are real and measurable, as in the Casimir effect [62].
In connection with the shell configuration II, it is to be noted that the stiff fluid model, which refers to a Zel'dovich universe, has been employed by several authors in various situations, such as the cold baryonic universe [30], the early hadron era [63], scalar field fluids [64], and LRS Bianchi-I cosmological models [65]. There are also recent applications of, and claims for, the stiff fluid EOS in various astrophysical systems, such as the neutron star RX J1856-3754 [66], hyperon stars [67], and structure formation [68]. As a final remark, we would like to add that our sole aim in the present work was to find a classical analogue of the higher dimensional gravastar as an alternative to black holes, and it seems that we have been quite successful in this attempt.
Effects of Different Warm-up Protocols on Leg Press One Repetition Maximum Performance

In order to investigate the effects of different warm-up protocols on one repetition maximum (1RM) leg press performance, 23 rowers (age 21.48 ± 3.12 years, height 185.17 ± 8.22 cm, body mass 83.86 ± 8.7 kg) completed 1RM leg press tests after four different general warm-up conditions with a standardized specific warm-up. The workloads of the warm-up protocols were individually designed according to the results of the incremental maximal rowing ergometer test applied initially. The duration of the protocols was fixed at 15 minutes (min) for each participant, but the protocols differed in intensity. In the statistical analysis, warm-up condition was set as a fixed factor and participant as a random factor. Tukey's post hoc test was employed whenever a significant difference was found. A probability level of 0.05 was established to determine statistical significance. All statistical analyses were conducted using SPSS version 20. In conclusion, approximately 4% higher 1RM results were obtained after the low-intensity (40% of VO2max) protocol containing two intermittent 15-second sprints in the last 5 min of the protocol. Thus, the results of the present study are important for both practical and research environments.

Introduction

Strength is one of the most important predictors of performance in body-weight-supported sports such as rowing and canoe-kayak (Akca and Müniroglu, 2008; Gee et al., 2011; McKean and Burkett, 2014). Because of the demonstrated relationships between strength measures and rowing performance, strength training appears to be an essential part of the training programs of rowers (Gee et al., 2011; McNeely et al., 2005). Testing of the one repetition maximum (1RM) and designing the training plan according to the test results are an essential part of athletic preparation (Baechle and Earle, 2008). The 1RM test is the most common measure to assess the strength level of an athlete, and the accuracy of this test is crucial for determining individual training loads precisely (Brown and Weir, 2001). High reliability values (intra-class correlations between 0.82 and 0.99) have been reported for maximal strength tests involving leg pressing and arm pulling in rowers (Lawton et al., 2011a). Dynamic lower body strength tests that determined the maximal external load for a 1RM leg press (kg), isokinetic leg extension peak force (N), or leg press peak power (W) proved to be associated with 2000-m ergometer times (r = -0.54 to -0.68; p < 0.05) (Lawton et al., 2011b; Lawton et al., 2012; Lawton et al., 2013).

The warm-up procedure (type of exercise, stretching, specific activity) is among the factors that affect the precision of 1RM strength tests (Bishop, 2003a; Bishop, 2003b; Brown and Weir, 2001; Woods et al., 2007). It is generally recommended that the warm-up before maximum strength testing contain both general aerobic and specific (task-related, mimicking) exercises (Bishop, 2003b; Brown and Weir, 2001; Pescatello, 2014). The main aim of the general warm-up exercises is to increase body temperature, whereas the specific warm-up targets increased neuromuscular activation (Bishop, 2003b; Brown and Weir, 2001; Gourgoulis et al., 2003; Pescatello, 2014). Recent studies demonstrated the beneficial effects of longer-duration (15 min) general warm-ups over shorter durations (5-10 min) on 1RM strength performance (Barroso et al., 2013; Stewart et al., 2003).
Moreover, in a study conducted on state-level sprint kayakers, significantly better 500-m kayak ergometer performances were demonstrated after a warm-up that included short (10-second) supramaximal (200% of VO2max) intervals, compared with a continuous, constant-load warm-up (Bishop et al., 2003). One of the aims of the present study was therefore to investigate whether the addition of intermittent high-force movements to a warm-up improves 1RM strength performance. The leg press was selected for study because it is one of the most common exercises for developing lower body strength and is frequently used in the training programs of rowers; moreover, significant relationships between 1RM leg press scores and rowing performance have been demonstrated in previous studies (Akca, 2014; Chun-Jung et al., 2007; Yoshiga and Higuchi, 2003). The purpose of this study was to compare the effects of different general warm-up protocols on leg press 1RM performance.

Materials and Methods

To demonstrate the effectiveness of different general warm-up protocols on leg press 1RM performance, subjects were tested under four different conditions. Initially, subjects performed a 2000-m time trial and a maximal incremental exercise test on a rowing ergometer in order to determine the power values used for the warm-up protocols. In a randomized crossover fashion, 1RM leg press performance was then measured on four different occasions, with a different general warm-up protocol used on each occasion. After the general warm-up, subjects were instructed to rest for 3 minutes and then performed the specific warm-up protocol, which was standardized for all conditions. HR, RPE, and lactate were measured before and immediately after each warm-up session. Subjects were asked to refrain from caffeine, alcohol, and strenuous exercise for 48 hours before the tests. In addition, subjects were instructed to keep a diary of dietary intake on the day before testing, and the same dietary intake was replicated for the following tests. Tests were conducted at least 48 hours apart and at approximately the same time of day (± 1 hr) for each subject. 1RM strength scores would be expected to be at their highest during the specific training period and can be reduced by the altered training focus during the competitive period (García-Pallarés et al., 2009); the tests were therefore conducted at the end of the general preparation period of the yearly training plan.

Subjects

Twenty-three male collegiate rowers (age 21.48 ± 3.12 years, height 185.17 ± 8.22 cm, body mass 83.86 ± 8.7 kg, mean 2000-m time = 394.4 ± 11.5 seconds) volunteered to participate in this study. All subjects were trained, experienced, and accustomed to the model of ergometer used for the measurements; they also had at least 34 months of strength training experience (40.4 ± 5.8 months) and performed the leg press exercise in their regular training routine at least twice a week. The study was conducted in accordance with the ethical principles of the Declaration of Helsinki and approved by the institutional human research ethics committee. After reading a sheet containing information about the study design and any possible risks, all subjects signed an informed consent form.

2000-m Time Trial Test

An air-braked rowing ergometer (model D, Concept II, Morrisville, VT, USA) was used for the rowing performance measures. The drag factor setting of the ergometers was adjusted to 140, as recommended by the Amateur Rowing Association (ARA) for heavyweight men rowers (O'Neill and Skelton, 2004).
For the time trial test, subjects were asked to perform an all-out 2000 m on the ergometer. Heart rate (HR) was recorded with a telemetric HR monitor throughout the test (s610i, Polar Electro Oy, Finland). Completion time, stroke rate, HR, and average power outputs were recorded immediately after the test, both for the whole test and for every 500-m split separately.

Incremental Exercise Test

To determine the metabolic responses to loading, the incremental rowing ergometer test recommended by the Australian Institute of Sport (AIS) was executed (Hahn et al., 2000). The test protocol was discontinuous, with progressive 4-min increments, consisting of six submaximal stages and a final (7th) maximal stage. Each stage was separated by a one-minute recovery interval, during which blood samples for lactate analyses were taken from the earlobe. The workloads for the submaximal stages were determined individually based on each subject's best time during the 2000-m time trial test. The average 500-m pace of the 2000-m maximal test time plus four seconds was calculated to give the pace (per 500 m) that the subject was required to maintain during the sixth stage of the test. Successive increments of 6 seconds per 500 m were added in order to calculate the required paces for the earlier workloads. The final (7th) stage was performed as a 4-min maximal effort, with verbal encouragement given in this final stage. Gas exchange during the test was measured breath by breath with a gas analysis system (Oxycon Mobile, Jaeger GmbH, Germany). HR was recorded during the test via the sensor of the gas analyser using a T-31 coded transmitter belt (Polar Electro OY, Finland). Blood lactate concentrations were measured using an automated lactate analyser (YSI Sport 1500, Yellow Springs, Ohio, USA). The rating of perceived exertion (RPE) was assessed before and after each stage (Borg, 1982). The lactate and gas analysers were calibrated prior to the tests according to the manufacturers' instructions. VO2 values were averaged over 15-second intervals, and VO2max was determined by averaging the four highest consecutive oxygen consumption values recorded during the last stage of the test.

Familiarization 1RM Leg Press Sessions

Subjects performed a familiarization session before undertaking any of the warm-up protocols in order to optimize the effectiveness of the specific warm-up and testing application. The individual settings of the leg press machine (Diesel Fitness, Florida, USA) were recorded during the familiarization session and replicated during the 1RM tests. Subjects performed a self-selected warm-up for 5 min before the session (Barroso et al., 2013).

Warm-Up Protocols

The protocols were performed on the same ergometer used for the maximal incremental exercise test. Beneficial effects of longer-duration (15 min) general warm-ups over shorter durations (5-10 min) on 1RM performance have been demonstrated (Barroso et al., 2013; Stewart et al., 2003); therefore, 15 min was chosen as the duration of each protocol. Although the duration of each warm-up protocol was the same, the conditions differed in intensity. The combinations were as follows:

1. Constant Low Intensity (CLI): 15 min at the power output corresponding to 40% of VO2max.

2. Low-Frequency Intermittent (LFI): 13 min at the power output corresponding to 40% of VO2max, plus two 15-second sprints at an intensity equivalent to 170% of the power output at VO2max during the last 2 min, each separated by 45 seconds of recovery at 40% of VO2max.
3. Moderate-Frequency Intermittent (MFI): 10 min at the power output corresponding to 40% of VO2max, plus five 15-second sprints at an intensity equivalent to 170% of the power output at VO2max during the last 5 min, each separated by 45 seconds of recovery at 40% of VO2max.

4. High-Frequency Intermittent (HFI): 5 min at the power output corresponding to 40% of VO2max, plus ten 15-second sprints at an intensity equivalent to 170% of the power output at VO2max, performed at the beginning of every minute in the last 10 min. (The four conditions are summarized in the code sketch at the end of the Results below.)

Subjects were only allowed to perform light, short-duration submaximal stretching exercises during the warm-up, because the negative effects of extensive stretching exercises on strength performance have been demonstrated in various studies (Bacurau et al., 2009; Costa et al., 2014; Rubini et al., 2007).

1RM Leg Press Test

After completing each warm-up protocol, subjects were instructed to rest for three min. After the rest, subjects performed the same specific warm-up regardless of their general warm-up protocol. The specific warm-up consisted of one set of eight repetitions and one set of three repetitions of the leg press at 50% and 70%, respectively, of the familiarization-session leg press performance, with a 2-min rest interval. Three min of rest was given after the specific warm-up, and subjects then had five attempts to achieve the 1RM score, with a relief interval of three min between attempts (Akca, 2014; Barroso et al., 2013). Subjects started the test with the knees fully extended and the feet on the footings. Subjects were asked to flex their knees to 90 degrees at the end of the eccentric phase before extension (the concentric phase) (Brown and Weir, 2001). A certified strength coach supervised the tests to ensure correct movement technique.

Data Analysis

Normality of the distribution was analyzed using the Shapiro-Wilk test. Lactate, HR, and RPE values from each warm-up protocol were compared using a mixed model analysis, with warm-up condition set as a fixed factor and subject as a random factor. Tukey's post hoc test was employed whenever a significant difference was found. A probability level of 0.05 was established to determine statistical significance. All statistical analyses were conducted using SPSS version 20 (SPSS Inc., Chicago, IL).

Results

The mean maximal oxygen consumption of the subjects was 58.1 ± 4.2 ml·kg−1·min−1. Lactate, HR, and RPE values were not significantly different at rest between the warm-up protocol groups (p = 0.994).

Ŧ Significantly different (p < 0.05) from CLI and HFI.

As presented in Table 1, the differences in HR, RPE, and lactate were statistically significant after the HFI protocol compared with any other protocol (p = 0.003). Lactate values were significantly different between CLI and HFI and the other protocols (p = 0.002). The differences between the values obtained following the LFI and MFI protocols were not significant (p > 0.05). As presented in Figure 1, 1RM leg press performance was higher after the LFI and MFI protocols compared with the others (p = 0.002). In contrast, 1RM values were significantly lower after the HFI warm-up protocol (p = 0.001). There was a significant difference between the scores of the HFI and CLI protocols (p = 0.003). No differences were detected between the LFI and MFI protocols (p > 0.05).
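The following is a rough schematic (our own illustration, not the authors' materials) that encodes the four warm-up conditions defined in the Methods and computes a crude mean intensity for each; the helper function and its simplifying assumption (sprint time simply replacing base-intensity time within the fixed 15 min) are ours:

import dataclasses

@dataclasses.dataclass
class Protocol:
    name: str
    sprints: int  # number of 15-s sprints at 170% of the power at VO2max;
                  # all remaining time is spent at 40% of the power at VO2max

PROTOCOLS = [
    Protocol("CLI", sprints=0),
    Protocol("LFI", sprints=2),
    Protocol("MFI", sprints=5),
    Protocol("HFI", sprints=10),
]

def mean_intensity(p: Protocol, total_min: float = 15.0) -> float:
    """Mean intensity over the warm-up, as a fraction of power at VO2max."""
    sprint_min = p.sprints * 15.0 / 60.0
    base_min = total_min - sprint_min
    return (base_min * 0.40 + sprint_min * 1.70) / total_min

for p in PROTOCOLS:
    print(f"{p.name}: mean intensity = {mean_intensity(p):.2f} x power at VO2max")

Run as-is, this reproduces the intended ordering of overall workload (CLI < LFI < MFI < HFI), which parallels the HR, RPE, and lactate pattern reported in Table 1.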
Discussion

The importance of leg strength for rowing performance has been demonstrated in several studies (Baechle and Earle, 2008; Chun-Jung et al., 2007; Gee et al., 2011; Lawton et al., 2011a; McNeely et al., 2005). Strength can be considered one of the limiting factors of rowing performance, along with various other factors such as starting power and muscular endurance (Gee et al., 2011). Thus, determining leg press 1RM performance as precisely as possible is crucial to optimizing the individual training plan of each rower. According to the results of the present study, 1RM leg press performance was significantly higher after the LFI and MFI warm-up protocols compared with the CLI and HFI protocols, and 1RM performance was significantly lower after the HFI warm-up protocol than after any other protocol. The 1RM scores were higher after the LFI warm-up protocol compared with MFI, but the differences were not significant.

The results of the present study indicated that the HR and RPE values determined after the HFI warm-up were approximately 30% higher than those after the CLI, LFI, and MFI protocols (Table 1). The physiological stress associated with the workload of the HFI warm-up protocol seems to lead to muscle fatigue, which may explain the decrease in 1RM performance (Barroso et al., 2013; Bishop, 2003b). The importance of increasing body temperature before a short-term activity like the 1RM test has been established by Bishop (2003b); by increasing body temperature appropriately, the harmful effects of excess fatigue can be avoided. A warm-up of 5-10 min duration has been recommended by testing guidelines before strength testing (Baechle and Earle, 2008; Pescatello, 2014). However, the aforementioned guidelines have little scientific support, and the suggested warm-up durations seem to be shorter than necessary to induce performance enhancement in strength tests. Several studies have demonstrated that a significant increase in muscle temperature occurs only after 15-20 min of aerobic activity (Price and Campbell, 1997; Stewart et al., 2003). Better 1RM leg press performance was demonstrated with a 15-min warm-up compared with 10 min (Barroso et al., 2013). Accordingly, 15 min was selected as the duration of each warm-up condition in the current study.

The intensity of the warm-up is an important determinant that affects the test result and should be organized carefully. According to the results of the current study, combining a long-duration warm-up (15 min) with a high frequency of supramaximal intermittent sprints (10 sprints) may impair 1RM leg press performance because of the accumulated effect of muscle fatigue. In light of these findings, it is conceivable that a warm-up protocol lasting 15 min should combine low exercise intensities (≤ 40% of VO2max) with two to five supramaximal sprints (about 170% of the VO2max workload) lasting 15 seconds, in order to avoid fatigue development and to employ lower body 1RM strength testing with optimum precision. In the current study, the LFI warm-up protocol produced better results compared with the other protocols. Furthermore, for the LFI protocol, the physiological stress parameters (HR, RPE, and lactate) were the second lowest among the four protocols. Although CLI induced lower physiological stress than LFI, it can be speculated that, because of the lack of intermittent high-intensity efforts in the CLI protocol, the exercise impulse was insufficient to raise muscle temperature appropriately, and 1RM performance was therefore lower compared with LFI. The strength performance difference after the LFI warm-up protocol may be considered small (approx. 4%).
However, performance improvements of about 4% were reported in bench press 1RM values after a periodized 12-week training cycle in eleven elite male kayakers (Garcia-Pallares et al., 2010). Moreover, an improvement of about 3-4% is similar to that observed in response to long-term strength training in strength-trained individuals (Kraemer, 1997). Strength performance testing allows the trainer to monitor the progression of the ongoing training plan; therefore, it is vital to detect the true 1RM value that reflects the maximal possible strength of the athlete.

It can be concluded from the results of the present study that the warm-up has an important effect on 1RM leg press performance. To obtain the most precise 1RM leg press result, the general warm-up before the test should contain 10 min of low-intensity (40% VO2max) exercise, and two supramaximal 15-second sprints should be added in the last 5 min of the warm-up. The performance improvement of about 4% after the LFI protocol is similar to the progression of highly trained individuals' strength values over long-term strength training. Thus, the results of the present study are important for both practical and research environments. These results must be viewed with caution because only collegiate male rowers were studied; whether the trend of the 1RM testing results is similar after similar warm-up protocols in different athletic populations is a worthwhile question for future research. On the other hand, using a rowing ergometer as a warm-up device before 1RM leg press testing can be recommended, since rowing ergometers are easy to find and are used regularly in most gyms. However, if a coach or personal trainer decides to use a rowing ergometer for the warm-up before a 1RM leg press test, attention must be paid to the rowing technique of the individual, because differences in the application of rowing technique may affect the physiological variables. Practitioners must keep in mind that these suggestions are limited to 1RM lower body maximum strength tests and should not be applied to other strength tests such as strength endurance or power.
Behavior Change and Skills Retention With an Action Record After Ultrasonography Training

In simulation training, behavior change (Kirkpatrick's level 3) is more important than learning improvement (Kirkpatrick's level 2). However, few studies have evaluated behavior change because it is difficult to assess objectively. Skills retention is another challenge. We evaluated whether keeping a record of the number of ultrasound (US) examinations performed after a simulation course led to positive behavior changes and improved skills retention.

... diagnosis, and contributes to reducing complications [1,2]. POCUS is also part of undergraduate and postgraduate medical education [3-5]. Numerous POCUS simulation courses with variable content and duration are available [6,7]. Generally, simulation courses such as the Advanced Cardiovascular Life Support course have been shown to increase knowledge or skills [8]. POCUS simulation courses for medical students and doctors have shown similar results [1,5,7,9]. Few studies have evaluated the educational effect of POCUS courses for nurse practitioners; however, some studies reported that this training improved skills [10-12].

In evaluating the educational effect of training, the four levels of learning evaluation advocated by Kirkpatrick and Kirkpatrick hold that "What did participants apply in practice?" (level 3; behavior) is more important than "What have participants learned?" (level 2; learning) (Fig. 1) [13]. Few studies have shown that simulation training can change both behavior and the learning level, especially regarding POCUS [14]. In addition, simulation courses can increase knowledge and skills immediately after the course; however, these gains tend to decline a few months after the training [4,8,9,15,16]. To resolve this problem, follow-up lectures or hands-on training after the initial course may be effective for maintaining knowledge [17-19]. POCUS simulation courses are similar; however, it is unclear whether both knowledge and clinical skills (e.g., image acquisition) can be maintained [4,5,9,15,16,20,21].

In this study, we held a 2-day POCUS simulation course for Japanese nurse practitioners (JNPs) and JNP trainees. These practitioners were instructed to record the number of ultrasound (US) examinations they performed before and after the course. This study had two aims. First, we aimed to determine whether keeping a record of the number of US examinations performed after the simulation course improved both knowledge and skills (level 2; learning) and led to behavior change (level 3; behavior). Second, we aimed to determine whether keeping a record of the number of US examinations helped to maintain US knowledge and skills. Behavior change was evaluated by comparing the number of US exams performed before and after the course. To evaluate maintenance of the learning level, we evaluated image interpretation skills, image acquisition skills, and confidence in performing POCUS before and immediately after the course, and again 4 months after the course.

Study design and setting

We held one POCUS training program in 2018 and one in 2019. The program involved four parts: 1) recording the number of US exams performed during the 3 months before participating in the POCUS course, 2) participating in the 2-day POCUS course, 3) recording the number of US exams performed during the 3 months after participating in the course, and 4) a follow-up evaluation 4 months after the course.
All participants recorded the number of cardiac US, lung US, deep vein thrombosis (DVT) US for the lower extremities, and abdominal US they performed in the 3 months before the course. We chose these four US examinations because the 2-day POCUS course focused on these examinations. A standardized POCUS course with a proven educational effect was adopted for our 2-day POCUS course. 7 The educational effects of this course for medical students and doctors have been demonstrated; however, this was the first such training for JNPs. 7 Participants' image interpretation skills, image acquisition skills, and confidence in performing POCUS were evaluated before and after the course. Image interpretation skills were evaluated by a written examination using POCUS case study videos and multiple-choice questions. 7 Image acquisition skills were evaluated in hands-on situations by the POCUS instructors using live models and evaluation sheets. Confidence was evaluated by a self-evaluation survey with a five-point Likert scale using previously validated multiple-choice questions and a self-evaluation survey. 7 Participants then recorded the number of US examinations they performed for the 3 months after the course. Four months after completing the course, participants completed a follow-up test covering image interpretation skills, image acquisition skills, and confidence. This test was the same as that performed immediately after the course (Figure 2). There were no interventions, including didactic lectures, between the end of the course and the 4-month follow-up test. Nine instructors were involved in the course and evaluated participants. All instructors were certified POCUS instructors. 7 Before the course, the instructors received a lecture presenting the evaluation method and online discussions to standardize the evaluation method. This study was approved by the Institutional Review Board of the Tokyo Bay Urayasu Ichikawa Medical Center. Before participation, participants were informed that the results of this study would not affect evaluation of their work or future training. Written informed consent was obtained from all participants. Participants Japan has an original nurse practitioner system (JNP system), which began in 2008 and was partially revised in 2015. 22 There are several certified JNP training programs in Japan. In the present study, JNPs and JNP trainees were recruited through the JNP training program delivered by the Japan Association for the Development of Community Medicine from 2018 to 2019. During the study period, JNPs worked in hospitals or clinics, and JNP trainees worked in hospitals and were engaged in on-the-job training under the supervision of attending doctors; therefore, all participants could access portable US machines and perform US examinations. Data collection All participants recorded the number of US exams they performed on their own for 3 months before participating in the 2-day POCUS course. This information was recorded on a Microsoft Excel spreadsheet distributed by the study secretariat. These sheets were collected before participants began the POCUS course. The number of US examinations performed during the 3 months was categorized: category 1: 0 cases, category 2: 1-9 cases, category 3: 10-29 cases, category 4: 30-49 cases, category 5: 50-99 cases, and category 6: ≥100 cases. Image interpretation skills, image acquisition skills, and confidence in performing POCUS were evaluated pre- and immediately post-course.
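For clarity, the six-category coding of examination counts described above can be expressed as a small helper function (a sketch for illustration only; this helper is not part of the study's materials):

```python
def exam_category(n_exams: int) -> int:
    """Map a 3-month US examination count to the study's six categories."""
    if n_exams == 0:
        return 1   # category 1: 0 cases
    if n_exams <= 9:
        return 2   # category 2: 1-9 cases
    if n_exams <= 29:
        return 3   # category 3: 10-29 cases
    if n_exams <= 49:
        return 4   # category 4: 30-49 cases
    if n_exams <= 99:
        return 5   # category 5: 50-99 cases
    return 6       # category 6: >= 100 cases
```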
Participants recorded the number of US examinations they performed for 3 months after the course, using the same system as for the 3-month period before the course. These records were collected by the study secretariat. Four months after the course, participants completed follow-up testing to evaluate their image interpretation skills, image acquisition skills, and confidence. This test was the same as that used for the pre- and immediate post-course testing. Statistical analysis Comparisons of the difference between the US examination categories before and after the course were analyzed using Wilcoxon's signed-rank test. Written examinations, evaluation sheets, and self-evaluation survey scores were analyzed with the Friedman test with Bonferroni adjustment. Data analyses were performed using EZR statistical software (version 1.52), which is a graphical user interface for R (The R Foundation for Statistical Computing, Vienna, Austria). 23 Participants Thirty-five participants completed the POCUS training program in 2018 or 2019. Two participants were excluded because they could not complete the program. Nine JNPs and 24 JNP trainees from 21 facilities completed the program. These facilities were geographically distributed across Japan from Hokkaido in the north to Nagasaki prefecture in the south. Some participants were from the same facilities, and most (94%) worked in community hospitals; no participants worked in university hospitals. The mean number of post-graduate years was 13.2 years (range: 6-22 years). All participants were novice POCUS trainees (Table 1). Image interpretation skills, image acquisition skills, and confidence scores The mean scores for the image interpretation skills test pre-course, immediately post-course, and at the 4-month follow-up evaluation were 37.1 (SD: 16.0), 72.6 (SD: 11.1), and 71.8 (SD: 9.9) (out of 100 points), respectively. Both the immediate post-course test and the 4-month follow-up test scores were statistically significantly higher than the pre-course scores (P < 0.001). However, the difference between the immediate post-course and the 4-month follow-up test scores was not statistically significant (P = 1.00). The mean scores for the image acquisition skills test pre-course, immediate post-course, and at the 4-month follow-up were 13.7 (SD: 10.7), 53.6 (SD: 8.9), and 52.9 (SD: 9.3) (out of 71 points), respectively. Both the immediate post-course and 4-month follow-up test scores were statistically significantly higher than the pre-course test scores (P < 0.001). The difference between the immediate post-course and the 4-month follow-up test scores was not statistically significant (P = 1.00). The mean scores for confidence pre-course, immediate post-course, and at the 4-month follow-up survey were 15.8 (SD: 3.6), 35.7 (SD: 10.5), and 33.0 (SD: 11.6) (out of 70 points), respectively. Both the immediate post-course survey and 4-month follow-up test scores were statistically significantly higher than the pre-course survey scores (P < 0.001). The difference between the immediate post-course and the 4-month follow-up test scores was not statistically significant (P = 0.34) (Figure 4). Discussion This study aimed to determine whether recording the number of US examinations performed after taking a POCUS simulation course led to a behavior change, and whether keeping a record of the number of US examinations performed maintained US knowledge and skills. Our results showed that keeping a record significantly increased the number of US examinations performed.
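As a minimal sketch of the statistical comparisons described above (hypothetical numbers; SciPy stands in for the EZR/R workflow actually used in the study):

```python
import numpy as np
from scipy import stats

# Paired ordinal categories (1-6) before vs after the course:
# Wilcoxon's signed-rank test (hypothetical data)
before = np.array([1, 2, 1, 3, 2, 2, 1, 4, 2, 3])
after = np.array([3, 3, 2, 4, 4, 3, 2, 5, 3, 4])
print(stats.wilcoxon(before, after))

# Scores at three time points (pre, post, 4-month follow-up):
# Friedman test, then pairwise Wilcoxon with Bonferroni adjustment
rng = np.random.default_rng(0)
pre, post, follow = rng.normal([37, 73, 72], 10, size=(33, 3)).T
print(stats.friedmanchisquare(pre, post, follow))
pairs = [(pre, post), (pre, follow), (post, follow)]
p_adj = [min(1.0, stats.wilcoxon(a, b).pvalue * len(pairs)) for a, b in pairs]
print("Bonferroni-adjusted p-values:", p_adj)
```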
In addition, keeping a record after the simulation training led to a behavior change in the field of simulation education. Keeping a record also contributed to maintaining POCUS knowledge, skills, and confidence. Our study suggests that keeping a record may be useful to improve skills retention in the field of simulation education. In educational methods, including in simulation training, it is important and most effective to cause both a reaction or learning improvement (Kirkpatrick's levels 1 and 2, respectively), and behavior change or improvement (Kirkpatrick's levels 3 and 4, respectively). 13 However, it is often difficult to evaluate levels 3 and 4 because this evaluation is time consuming and requires effort and cost to train evaluators and prepare tools and facilities. Therefore, few studies have evaluated behavior change, and effective methods to change behavior have not been established in the field of simulation training. 14,24,25 In our study, recording the number of clinical US examinations increased the number of these examinations that participants performed after the simulation course. Our results also showed that keeping a record after the simulation course led to a behavior change. This study showed that the quality of the examinations was maintained after the course. The 4-month follow-up test results showed that the image interpretation skills, image acquisition skills, and confidence scores were statistically significantly improved compared with the pre-course test scores, and that these scores did not decline compared with the immediate post-course test scores. The problem of skills retention is one of the most important problems in the field of simulation training. 4,5,16,17 Knowledge, skills, and confidence decline in a few months to 1 year after a simulation course with no interventions. 4,5,9,16,19 Several methods have been proposed to help participants retain knowledge and skills; for example, providing didactic or online lectures, and holding hands-on training sessions or simulation training courses regularly or several months after the course. 15,[18][19][20][21] The method that we used in this study (keeping a record) was useful to maintain knowledge, skills, and confidence 4 months after the simulation course. This method involves less effort and cost than conventional methods and is feasible and can be implemented at most facilities. This study had several limitations. First, we recruited only JNPs; however, participants were from 21 facilities in numerous regions across Japan. The number of post-graduate years also varied; repeating this research with attending doctors, residents, and medical students is needed to clarify the usefulness of this method for these cohorts. Second, this study involved a follow-up test 4 months after the course. Participants were aware of this follow-up test in advance, which might have influenced their behavior. However, all participants were informed that the results of this study would not affect their future work or training. Therefore, the impact of the follow-up test was not considered large. Third, the number of US examinations was self-reported. Additionally, nine instructors participated as evaluators. All instructors were certified and trained regarding how to evaluate participants before this study. However, it cannot be denied that there might have been measurement error. This study was not a crossover study, and we did not compare study participants with a group who did not keep a record.
However, research has shown that skills and knowledge decrease after a simulation course without intervention. Therefore, our method appeared to be effective to improve the educational effectiveness of simulation courses. Although there were several limitations, our study indicated that keeping a record after taking a simulation course can lead to behavior change. This method also effectively maintained knowledge, skills, and confidence and is inexpensive, with good feasibility. This method is therefore useful to induce behavior change (Kirkpatrick's level 3) and improve skills retention in the field of simulation training. Conclusion Keeping a record after a POCUS simulation course increases the number of clinical US examinations performed after the course. Image interpretation skills, image acquisition skills, and confidence also improve and are maintained. This method not only improves the learning effect, but also leads to changes in behavior (Kirkpatrick's levels 2 and 3, respectively) in the field of simulation training. Skills retention is also improved. The method is inexpensive and feasible. Combining simulation training with keeping a record may improve the educational effect in the field of simulation training. Declarations Ethics approval and consent to participate: The study protocol was approved by the Institutional Review Board of the Tokyo Bay Urayasu Ichikawa Medical Center. Written informed consent was obtained from all study subjects before participation. All methods were carried out in accordance with the regulations of the Institutional Review Board of the Tokyo Bay Urayasu Ichikawa Medical Center. Consent for publication: Not applicable. Availability of data and materials: The datasets generated and analysed during the current study are available from the corresponding author on reasonable request. Competing interests: The authors declare that they have no competing interests. Authors' contributions: All authors were involved in study design, data interpretation, and manuscript preparation. TY was the principal investigator and was responsible for regulatory compliance, participant recruitment, data collection, data analysis, and manuscript preparation. JE, HF, KE, YK, and YT contributed to the study coordination and data collection, entry, and analysis. All authors read and approved the final manuscript.
2021-05-10T00:02:57.643Z
2021-02-05T00:00:00.000
{ "year": 2021, "sha1": "c8b570c39c039dea9321574f7bf2acc731095312", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-154531/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "fb9393eb8419e1d71ef08ae805752e99f00c0522", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
229685627
pes2o/s2orc
v3-fos-license
Novel Small Molecular Compound AE-848 Potently Induces Human Multiple Myeloma Cell Apoptosis by Modulating the NF-κB and PI3K/Akt/mTOR Signaling Pathways Background We aimed to investigate the anti-multiple myeloma (MM) activity of the new small molecular compound AE-848 (5-bromo-2-hydroxyisophthalaldehyde bis[(1-methyl-1H-benzimidazol-2-yl)hydrazone]) and its underlying anti-MM mechanism. Methods Cell viability and apoptosis were detected and quantified by using MTT and flow cytometry, respectively. JC-1 dye-related techniques were used to assess mitochondrial membrane potential (MMP). Western blotting was applied to detect the expression of NF-κB and PI3K/Akt/mTOR pathway-associated proteins. The in vivo activity of AE-848 against MM was evaluated in an MM mouse model. Results Application of AE-848 into the in vitro cell culture system significantly reduced the viability and induced apoptosis of the MM cell lines, RPMI-8226 and U266, in a dose- and time-dependent manner, respectively. JC-1 dye and Western blotting analysis revealed that AE-848 induced the cleavage of caspase-8, caspase-3, and poly ADP-ribose polymerase (PARP), resulting in loss of mitochondrial membrane potential (MMP). Both the NF-κB and PI3K/AKT/mTOR signaling pathways were involved in AE-848-induced apoptosis of U266 and RPMI8226 cells. Moreover, AE-848 leads to cell cycle arrest of MM cells. Its anti-MM efficacy was further confirmed in a xenograft model of MM. AE-848 administration significantly inhibited MM tumor progression and prolonged the survival of MM-bearing mice. More importantly, our results demonstrated that AE-848 markedly induced primary MM cell apoptosis. Conclusion Our results for the first time showed that the small compound AE-848 had potent in vitro and in vivo anti-myeloma activity, indicating that AE-848 may have great potential to be developed as a drug for MM treatment. Introduction Multiple myeloma (MM) is a malignant tumor characterized by abnormal hyperplasia of plasma cells in the bone marrow. It accounts for approximately 10% of hematopoietic tumors worldwide, and its incidence is increasing globally. 1 The widespread filling of malignant plasma cells in bone marrow leads to multiple osteolytic lesions, repeated infections, anemia, hypercalcemia, hyperviscosity syndrome and kidney damage, which can eventually result in adverse consequences. 2 The pathogenesis of MM is extremely complex, involving a variety of cytokines, adhesion molecules, signal transduction pathways, cellular genetic abnormalities, and the bone marrow microenvironment. 3 Among these, nuclear factor κB (NF-κB) is a key factor that selectively binds to the enhancer of B cell kappa-light chain to regulate the expression of many genes. Hyperactivated NF-κB signaling thus serves as an important prognostic biomarker and therapeutic target for MM. 4,5 The PI3K/Akt/mTOR signaling pathway controls a number of biological processes critical to tumorigenesis, including apoptosis, transcription, translation, and cell cycle. 6 A growing number of studies have shown that inhibition of the PI3K/Akt/mTOR signaling pathway triggers apoptosis in MM cells. 7,8 In recent years, advances in treatments, including immunomodulatory drugs (eg, thalidomide and lenalidomide), 9 proteasome inhibitors (eg, bortezomib and carfilzomib), 10 and autologous transplantation, 11 have changed the outcome for MM patients tremendously. However, MM remains incurable, and there is a demand for novel therapeutic compounds.
In the present study, we evaluated the anti-MM activity of the small molecule 5-bromo-2-hydroxyisophthalaldehyde bis[(1-methyl-1H-benzimidazol-2-yl)hydrazone] (AE-848), which was cytotoxic to both MM-derived cell lines and primary MM cells. Administration of AE-848 significantly inhibited myeloma growth and prolonged the survival of myeloma-bearing mice. These findings together suggest a therapeutic potential of AE-848 for the treatment of MM. Materials and Methods General 1H NMR spectra were recorded on a Bruker DRX spectrometer at 600 MHz using DMSO-d6 as the solvent. Melting points (Mp) were determined using a Stuart melting point apparatus. All reagents and solvents were purchased from commercial sources and used without further purification. Synthesis Synthesis of 2-Hydrazino-1-Methyl-1H-Benzimidazole (1) Ethanol (10 mL) was added to 2-chloro-1-methyl-1H-benzimidazole (1 g, 6.4 mmol), followed by addition of hydrazine (1 mL, 32 mmol) under stirring at room temperature. The solution was heated to 70 °C and stirred for 1 h, and then the reaction mixture was cooled to room temperature. After filtration and elimination of the solvent, the product was obtained as a white solid (2-hydrazino-1-methyl-1H-benzimidazole, Mp: 148-150 °C). Synthesis of 5-Bromo-2-Hydroxy-1,3-Benzenedicarboxaldehyde (2) Trifluoroacetic acid (15 mL) was added to 4-bromophenol, followed by the addition of hexamethylenetetramine (1.62 g, 11.5 mmol) at room temperature under stirring. The solution was heated to 120 °C and refluxed for 12 h under argon. HCl (4N, 30 mL) was added to the reaction mixture, which was then stirred for another 1 h. The reaction mixture was cooled to room temperature and the crude product was obtained by filtration and elimination of the solvent. The pure product was obtained by column separation (eluent: petroleum ether/ethyl acetate = 3:1) as a yellow solid (5-bromo-2-hydroxy-1,3-benzenedicarboxaldehyde). Synthesis of AE-848 (3) Ethanol (10 mL) was added to a mixture of 2-hydrazino-1-methyl-1H-benzimidazole (1 mmol) and 5-bromo-2-hydroxy-1,3-benzenedicarboxaldehyde (1 mmol), and 2 drops of acetic acid were added. The precipitate formed after stirring for 30 min. The reaction mixture was refluxed for 1-2 h and filtered while hot. The crude product was obtained by washing with hot methanol, and the pure product was purified by crystallization from tert-butyl alcohol and DMF. Cell Culture Human MM cell lines (U266 and RPMI8226) were obtained from American Type Culture Collection (ATCC, Manassas, VA, USA) and cultured in RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS) at 37 °C in a humidified incubator with 95% air and 5% CO₂. Cells in the logarithmic growth phase were selected for subsequent experiments. Primary MM Cells The human sample study was approved by the Human Ethics Research Committee of the Second Hospital of Shandong University (approval no. KYLL-2019(KJ) P-0205). Bone marrow specimens were obtained from newly diagnosed or relapsed/refractory patients with MM. Bone marrow mononuclear cells (BMMCs) were isolated via Ficoll-Hypaque (Tianjin HY Bioscience Co., Ltd., Tianjin, China) density gradient centrifugation and further cultured in RPMI-1640 medium with 10% FBS. Blood samples from healthy volunteers were collected and treated with Ficoll-Hypaque density gradient centrifugation to isolate peripheral blood mononuclear cells (PBMCs) following the reagent instructions. Each patient and healthy volunteer provided informed consent.
The study was approved by the Human Ethics Research Committee of the Second Hospital of Shandong University in accordance with the Declaration of Helsinki. Cell Viability Analysis MTT assay was conducted to detect cell viability. 12 U266 and RPMI8226 cells were seeded at 2×10⁴ cells/well and treated with 2.5, 5, 10, or 20 µM AE-848 for 12 h, or with 5 µM AE-848 for 24 and 48 h, respectively. The cells were incubated with MTT at room temperature for 4 h. Subsequently, an appropriate amount of DMSO was added, and the plate was shaken on an oscillator for 15 min. The optical density at 570 nm was measured with a multifunctional microplate reader (Synergy Neo, BioTek, Winooski, VT, USA), and the values were expressed as absorbance. Apoptosis Analysis U266 and RPMI8226 cells (2×10⁵ cells/well) were incubated with AE-848 (2.5, 5, and 10 µM) for 12 h or incubated with 5 µM AE-848 for 24 h and 48 h. The cells were then incubated with Annexin V in the dark for 15 min. Subsequently, 2 µL PI was added to the cell suspension, and the apoptosis rate was measured using a flow cytometer (BD Biosciences, San Jose, CA, USA). For eight primary MM samples, the cells (2×10⁵ cells/well) were inoculated into 24-well plates with 5 μM AE-848 for 12 h. The cultured cells were then collected, washed, and stained with anti-human CD38-PE monoclonal antibody in PBS for 20 min at room temperature in the dark. Afterwards, the cells were washed with PBS twice and resuspended in 100 μL binding buffer containing 2 μL Annexin-V. The percentage of apoptotic cells was measured using FACS. Analysis of MMP JC-1 dye was used to detect MMP. 13 The images were taken with an inverted fluorescent microscope. Briefly, U266 and RPMI8226 cells (2×10⁵ cells/well) were treated with AE-848 (5 µM) for 12 h. After incubation, the MM cells were first rinsed with PBS, then added to the JC-1 dye working solution and shaken evenly. Next, the cells were incubated at room temperature for 15 min, washed with JC-1 staining buffer twice, and analyzed by flow cytometry. Cell Cycle Analysis Cell cycle distribution was analyzed using flow cytometric assay, as previously described. 14 In brief, U266 and RPMI8226 cells treated with AE-848 (2.5, 5, and 10 µM) were harvested, washed twice with PBS, and fixed overnight with 75% ethanol. U266 and RPMI8226 cells were incubated with 500 μL PI/RNase staining buffer at 37 °C for 30 min, followed by FACS analyses. Mouse Xenograft Model The animal study was approved by the Second Hospital of Shandong University of Medicine Institutional Animal Care & Use Committee (approval no. KYLL-2018(LW) 019). The study obeyed the principles of the ethical guidelines outlined by the International Council for Laboratory Animal Science (ICLAS). 15 Female NOD/SCID mice (6-8 weeks, weighing 18-20 g) were purchased from the Beijing Vital River Laboratory Animal Technology Co., Ltd (Beijing, China) and raised under a specific-pathogen-free (SPF) environment. Mice were subcutaneously injected with 1×10⁷ RPMI8226 cells suspended in 100 μL normal saline (NS) in the right foreleg. Based on our pilot evaluations, we selected a dose of 12.5 mg/kg via intraperitoneal injection for subsequent in vivo experiments. Approximately 3 weeks after RPMI8226 cell injection, when the tumor reached a size of approximately 200 mm³, 12 tumor-bearing mice were randomly divided into two groups: the control (n=6) and treatment groups (n=6).
The mice in the control group were intraperitoneally injected with NS containing DMSO and Cremophor EL for 14 consecutive days, while the mice in the treatment group were additionally administered AE-848 (12.5 mg/kg). When the mice reached the endpoint of the observations, which was defined as when the tumor size exceeded 2.0 cm in any direction or when a mouse was unable to creep for food and/or water, the mice were humanely euthanized by cervical dislocation. Changes in mouse weight and tumor volume were monitored in the control and treatment groups every 3 days. The administration of compounds was carried out as a blind experiment, and all information about the expected outputs and the nature of compounds used was kept from the animal technicians. Tumors were measured using calipers, and volumes were calculated using the formula V = long diameter × (short diameter)² × π/6. 16 Statistical Analysis The data were analyzed using SPSS 19.0, and one-way analysis of variance (ANOVA) and Bonferroni's test were used to compare differences among different treatment groups. Survival rate was analyzed by Kaplan-Meier analysis. A P-value of < 0.05 was considered statistically significant. Inhibitory Effect of AE-848 on U266 and RPMI8226 Cell Viability AE-848 is a synthesized small-molecule compound (Figure 2A). To investigate whether AE-848 affects the viability of U266 and RPMI8226 cells, we conducted an MTT assay. As presented in Figure 2B and D, AE-848 suppressed the viability of U266 and RPMI8226 cells in a concentration- and time-dependent manner. More specifically, after treatment with 2.5, 5, 10, and 20 μM AE-848 for 12 h, the inhibition rates of U266 and RPMI8226 increased gradually. The IC50 values of U266 and RPMI8226 cells were 4.3 ± 2.5 μM and 5.1 ± 3.5 μM, respectively (Figure 2B and D). As shown in Figure 2C and E, after exposure to 5 μM AE-848 for 12, 24, and 48 h, the viability of both cell lines decreased in a time-dependent manner. AE-848 Has Lower Cytotoxicity Against Peripheral Blood Mononuclear Cells To evaluate the toxicity of AE-848 to normal cells, we collected PBMCs from normal human peripheral blood, and compared apoptosis between PBMCs and U266 cells incubated with AE-848 for 12 h. Annexin V/PI staining results showed that when incubated with 5 μM AE-848 for 12 h, AE-848 exhibited negligible toxicity to normal PBMCs, while a great killing effect was observed on U266 cells (13.3 ± 1.1%, PBMCs vs 68.0 ± 4.3%, U266; P < 0.001) (Figure 2F and G). Induction of Cell Apoptosis by AE-848 U266 and RPMI8226 cells were stained with Annexin-V FITC and PI to test apoptosis by flow cytometry. The total numbers of AV+PI− and AV+PI+ cells were counted as apoptotic cells. As shown in Figure 3A, the induction of apoptosis by AE-848 increased with its concentration, and the proportion of Annexin V-positive cells increased. More specifically, compared to the control group (11.4 ± 4.5%), the apoptosis of U266 cells increased to 36.3 ± 5.4% at 2.5 μM, 62.6 ± 4.6% at 5 μM, and 85.0 ± 2.5% at 10 μM. Meanwhile, the apoptotic RPMI8226 cells (baseline 6.6 ± 1.7%) increased to 13.6 ± 4.6% at 2.5 μM, 35.2 ± 5.0% at 5 μM, and 66.4 ± 2.3% at 10 μM (Figure 3B). A similar pattern was observed in Figure 3C and D, in which markedly increased apoptosis occurred after incubation with 5 μM AE-848 for 24 or 48 h. Apoptosis was induced in both U266 and RPMI8226 cells in a dose- and time-dependent manner. AE-848 Induced Loss of MMP in MM Cells JC-1 dye-related techniques were used to assess MMP in MM cells.
The relative proportion of red and green fluorescence is commonly used to measure the degree of mitochondrial depolarization. A decrease in the red/green ratio is indicative of apoptosis. When JC-1 staining was applied to investigate the possible involvement of the mitochondrial apoptosis pathway, we found that after exposure to 5 µM AE-848 for 12 h, a dramatic drop in MMP was observed in both U266 and RPMI8226 (Figure 5A and B) (U266, treated vs untreated, 87.0 ± 3.1% vs 8.0 ± 3.1%, P < 0.001; RPMI8226, treated vs untreated, 82.5 ± 2.7% vs 15.7 ± 3.5%, P < 0.001). In addition, a stronger green fluorescence was observed in AE-848-treated groups, in line with our flow cytometry data (Figure 5C). These results suggest that the mitochondrial-related intrinsic apoptosis pathway is involved in AE-848-induced apoptosis. AE-848 Induced Cleavage of Caspase 8, Caspase 3, and Cleaved PARP in MM Cells Western blotting analysis was performed to study the underlying mechanisms of the related proteins associated with mitochondria-mediated intrinsic apoptosis in U266 cells. This included pro-caspase-3 and pro-caspase-8, two key proteases in the apoptosis pathway. Processing of both caspases into their cleaved forms was observed after AE-848 treatment. AE-848 Induces Apoptosis in MM Cells in a Caspase-Dependent Manner In this study, we observed caspase activation in MM cells treated with AE-848. To further reveal the importance of caspase activation for AE-848-induced apoptosis, we applied Z-VAD-FMK (zVAD), a pan-caspase inhibitor, to cell culture at 20 μM for 1 h, followed by the addition of AE-848 (5 μM). Interestingly, flow cytometry results showed that zVAD significantly attenuated AE-848-induced apoptosis (24.0 ± 2.1% vs 66.7 ± 3.0% in U266 cells; 14.0 ± 2.0% vs 39.3 ± 3.1%, RPMI8226 cells; P < 0.001; Figure 6C and D). Consistently, Western blotting revealed that zVAD markedly reduced the cleavage of caspase-3 and PARP (Figure 6E). Taken together, the extrinsic cell apoptosis pathway is involved in AE-848-induced apoptosis in a caspase-dependent manner. AE-848 Induced Cell Cycle Arrest in MM Cells We next conducted flow cytometry assays to test whether AE-848 could cause cell cycle arrest. As shown in Figure 8A-C, after AE-848 treatment for 12 h, the percentage of cells in G2/M phase increased from 23.5% to 29.3% for U266 (Figure 8B) and from 46.6% to 62.0% for RPMI8226 (Figure 8C). AE-848 Suppressed Tumor Growth and Prolonged Overall Survival of MM-Bearing Mice To examine the therapeutic effects of AE-848 on MM in vivo, MM cell-bearing mice were treated with AE-848 or NS containing DMSO and Cremophor EL by intraperitoneal injection every day for 14 days. As shown in Figure 9A-C, AE-848 administration inhibited tumor growth, in terms of tumor weight and tumor volume. Kaplan-Meier curves (Figure 9D) showed that AE-848 treatment significantly prolonged the survival time of MM cell-bearing mice (23.5 vs 17.0 days, P < 0.001). Taken together, AE-848 selectively inhibited tumor growth and prolonged survival of MM cell-bearing mice in vivo. Discussion Multiple myeloma (MM) is a malignant plasma cell disease, often accompanied by multiple osteolytic lesions, hypercalcemia, anemia, and kidney damage. 17 Although proteasome inhibitors and immunoregulatory drugs significantly improve the treatment efficiency and prognosis of MM patients, patients eventually relapse; MM therefore remains an incurable malignancy. Additionally, side effects created by the current regimens affect the quality of life of patients. 1,18
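A hedged sketch of how the in vivo endpoints above are typically computed is given below (hypothetical survival times; the lifelines package and its KaplanMeierFitter/logrank_test API are an assumed stand-in, not the authors' SPSS workflow; the tumor volume formula is the one stated in the Methods):

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def tumor_volume(long_d_mm, short_d_mm):
    """V = long diameter x (short diameter)^2 x pi/6, as in the Methods."""
    return np.pi * long_d_mm * short_d_mm ** 2 / 6.0

# Hypothetical survival times (days) for n=6 mice per arm; 1 = death observed
ctrl_days = np.array([15, 16, 17, 17, 18, 19])
treat_days = np.array([21, 22, 23, 24, 25, 26])
events = np.ones(6)

kmf = KaplanMeierFitter()
kmf.fit(ctrl_days, events, label="control (NS + vehicle)")
print("control median survival:", kmf.median_survival_time_)
kmf.fit(treat_days, events, label="AE-848 12.5 mg/kg")
print("treatment median survival:", kmf.median_survival_time_)

# Log-rank comparison of the two survival curves
res = logrank_test(ctrl_days, treat_days,
                   event_observed_A=events, event_observed_B=events)
print("log-rank p-value:", res.p_value)
```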
Thus, the development of new therapeutic drugs with high efficacy and minimal side effects is urgently required. By screening a small molecular library (Specs_SC), we identified twenty compounds and found that only AE-848 significantly induced apoptosis in MM cells but not in normal cells. Apoptosis is a kind of programmed death, which is crucial for cells to maintain the balance of the body. 19 As cancer cells have abnormal proliferation and survival characteristics, inducing apoptosis is one of the main mechanisms for many anti-tumor drugs. 20 A novel arylguanidino compound AE-848 was synthesized in our study, and our results demonstrate its potent anti-MM efficacy. We used an MTT assay to evaluate the viability of MM cells. AE-848 inhibited U266 and RPMI8226 cell viability in a dose- and time-dependent manner. Moreover, the same concentration of AE-848 and treatment times had negligible toxicity in normal PBMCs, providing a therapeutic window for AE-848 in MM treatment. Moreover, exposure to AE-848 remarkably induced apoptosis of U266, RPMI8226, and primary MM cells. Consistently, in vivo experiments using MM cell-bearing mice showed that tumor weight and tumor volume were significantly reduced following AE-848 treatment. Mitochondria-mediated apoptosis is considered an important apoptotic pathway. The loss of MMP leads to mitochondrial depolarization, which in turn promotes the release of apoptotic factors and ultimately triggers cell apoptosis. 21 In our study, treatment with AE-848 sharply decreased the MMP in U266 and RPMI8226 cells, which was consistent with our immunofluorescence results. Mitochondria-mediated apoptosis involves many factors, such as caspase-3, caspase-8, and PARP. 22 Caspase-8 (initiator caspase) and caspase-3 (executor caspase) are core components of apoptosis resulting from exogenous or endogenous apoptotic signals. 23 Meanwhile, PARP is the main cleavage substrate of caspase-3, which is considered an important indicator of caspase-3 activation. Western blotting results showed that treatment with AE-848 induced the cleavage of caspase-3, caspase-8, and PARP. When zVAD was added, the apoptosis rate of U266 cells was significantly reduced, coupled with diminished cleavage of caspase-3. Uncontrolled cell proliferation is one of the most important hallmarks of tumor cells. Moreover, an aberrant cell cycle accounts for dysregulated cell growth, which ultimately leads to tumor formation. 24 In this study, we demonstrated that AE-848 inhibited the growth of U266 and RPMI8226 cells. As a transcription factor, NF-κB is composed of dimeric complexes of p50 (NF-κB1) or p52 (NF-κB2), usually associated with members of the Rel family (P65, c-Rel, Rel-B). NF-κB plays an important role in regulating the inflammatory response and cellular proliferation, and is generally inactive in normal cells. 25 IκB, an inhibitor of NF-κB, prevents NF-κB from transferring into the nucleus. Phosphorylated IκB releases NF-κB, which then enters the nucleus and triggers the activation of downstream genes and participates in a series of biological processes, including MM. 26 In the present study, the cytoplasmic and nuclear protein expression of P65, NF-κB2 P100/P52, NF-κB1 P105/P50, Rel-B, and c-Rel in U266 and RPMI8226 cells was significantly inhibited by AE-848, indicating that AE-848 inhibits the NF-κB signaling pathway in MM cells.
PI3K/Akt/mammalian target of rapamycin (mTOR) is an important intracellular signaling pathway directly related to cell dormancy, proliferation, and longevity. 22 Increasing numbers of studies have suggested that inhibition of PI3K/Akt/mTOR is crucial for the anti-proliferative effect on MM cells. 27 Akt is the downstream target of PI3K. Upon PI3K activation, Akt is phosphorylated and activated to localize in the plasma membrane. Activated Akt regulates cell function by phosphorylating downstream factors, including various enzymes, kinases, and transcription factors. 28,29 mTOR is an important downstream target of PI3K/Akt, and participates in the regulation of tumor cell proliferation. 30 It has been shown that inhibition of the PI3K/Akt/mTOR signaling pathway prolongs the life cycle and improves the quality of life of MM patients. 31 In our study, decreased protein expression of PI3K, Akt, and mTOR was observed in MM cells treated with AE-848, which was consistent with the results of a previous study, 32 suggesting that the PI3K/Akt/mTOR pathway is involved in AE-848-induced MM apoptosis. Conclusion AE-848 induces apoptosis of MM cells in vitro and in vivo, and exerts this effect on primary MM cells through its inhibitory effects on the NF-κB and PI3K/Akt/mTOR signaling pathways. Given its minimal toxicity to normal blood cells, AE-848 may thus be a promising candidate drug to develop for the treatment of MM.
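As a worked illustration of how IC50 values such as those reported for U266 and RPMI8226 are commonly extracted from MTT dose-response data (a sketch under stated assumptions: synthetic viability readings and a four-parameter logistic model, not the authors' actual fitting procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical viability (% of untreated control) after 12 h of AE-848
dose = np.array([1.25, 2.5, 5.0, 10.0, 20.0])   # uM
viability = np.array([88.0, 70.0, 47.0, 22.0, 9.0])

popt, _ = curve_fit(four_pl, dose, viability,
                    p0=[0.0, 100.0, 5.0, 1.0], maxfev=10000)
print(f"estimated IC50 ~ {popt[2]:.1f} uM")
```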
2020-12-24T09:12:29.458Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "f5b1e6159e6bd4b2dc71bd6c22e13ac903737f7e", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=65025", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "75736989d0d657054e4746c4cc564320d31ae39a", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
247728213
pes2o/s2orc
v3-fos-license
Diffuse Ultrasonic Wave-Based Damage Detection of Railway Tracks Using PZT/FBG Hybrid Sensing System Damage detection of railway tracks is vital to ensure normal operation and safety of the rail transit system. Piezoelectric sensors, which are widely utilized to receive ultrasonic waves, may be disturbed in the railway system due to strong electromagnetic interference (EMI). In this work, a hybrid ultrasonic sensing system is proposed and validated by utilizing a lead-zirconate-titanate (PZT) actuator and a fiber Bragg grating (FBG) sensor to evaluate damage conditions of the railway tracks. The conventional ultrasonic guided wave-based method utilizing direct wave to detect damages is limited by the complex data analysis procedure and low sensitivity to incipient damage. Diffuse ultrasonic wave (DUW), referring to later arrival wave packets, is chosen in this study to evaluate structural conditions of railway tracks due to its high sensitivity, wider sensing range, and easy implementation. Damages with different sizes and locations are introduced on the railway track to validate the sensitivity and sensing range of the proposed method. Two damage indices are defined from the perspective of energy attenuation and waveform distortion. The experimental results demonstrate that the DUW signals received by the hybrid sensing system could be used for damage detection of the railway tracks and the waveform-distortion-based index is more efficient than the energy-based index. Introduction Rail transit has developed dramatically worldwide due to its convenience for people's daily lives. However, railway tracks are fragile in regard to defects because of high-speed operation, heavy loads, environmental exposure, and unpredictable impacts. Typical defects of railway tracks are shown in Figure 1. Catastrophic accidents may occur if defects cannot be detected [1,2]. Many nondestructive testing (NDT) techniques [3][4][5][6] have been explored and applied in the daily inspection of railway tracks combined with manual inspection. Among them, the ultrasonic bulk wave method with devices installed on the track inspection vehicle is well commercialized in routine inspection and maintenance for railway tracks [7]. Nevertheless, implementation of regular NDT techniques needs to interrupt the normal operation of the railway system, which is inconvenient, time-consuming, and unsafe for the inspectors. Furthermore, NDT techniques cannot monitor the conditions of the railway tracks in real time and provide timely alarms. To improve the drawbacks of NDT techniques, acoustic emission (AE) and ultrasonic guided wave-based methods have attracted more attention in past years. AE has shown its effectiveness in railway crack monitoring [8,9]. However, AE signals suffer a low signal-to-noise ratio (SNR) and are insensitive to cracks that expand at a low rate. Although artificial intelligence techniques have been applied to AE wave classification and mass data processing [10,11], AE techniques still face the problem of ambient noise. The ultrasonic guided wave method adopts an active manner to monitor structural conditions, which makes this method immune to most noise in condition monitoring of the railway system [12][13][14][15][16]. The ultrasonic guided wave is excited at a well-selected frequency and interacts with the defects. The wave reflection, transmission, mode conversion, and energy loss can be used for damage detection.
However, multimode and dispersive features of ultrasonic guided wave in railway tracks make it difficult to extract damage information from recorded signals [17]. In addition, this method utilizes direct wave (first arriving wave packets) for damage detection, which leads to low sensitivity to incipient damage and limited sensing range. All the above factors impede wider application of this method to condition monitoring of railway tracks. Different from the ultrasonic guided wave method, the diffuse ultrasonic wave (DUW)-based method utilizes later arrived wave packets (diffuse/coda wave) to monitor structural conditions. DUW has been neglected in past research due to its noise-like appearance. However, it was recently found that DUW is highly repeatable and carries more information about the medium [18,19]. DUW is very sensitive to small changes in the medium since it propagates for longer propagation distance and interacts with scattering sources (defects) multiple times. Compared to the direct wave, the DUW received by the sensor is the superposition of waves from all directions, which leads to the wider sensing range [20,21]; on the other hand, multiple scattering events make the DUW sensitive to small perturbations of the materials [22,23]. DUW was first explored in geological engineering to identify slight changes of the earth's crust by seismologists [24]. Recently, efforts have been made to apply DUW to damage detection and condition monitoring of concrete materials [25][26][27], composite structures [23,28,29], and metallic structures [30,31]. Liu et al. [25] utilized DUW for self-healing process monitoring of concrete where biomineralization was used to repair internal cracks. The results indicated that the relative velocity change of the DUW could reveal the strength development of the self-healing concrete. Lim et al. [23] applied DUW to early-stage fatigue damage detection and crack growth monitoring of carbon-fiber-reinforced polymer (CFRP) composite plate. The results showed that time domain distortion of DUW signals could be used to assess fatigue damage of the CFRP plate. Ahn et al. [26] utilized DUW to evaluate distributed cracks in concrete. The results demonstrated that both diffusivity and dissipation coefficients of UGW signals could be used for micro-crack detection. The feasibility of DUW for damage detection has also been demonstrated on woven fabric composite structures [28] and aeronautical honeycomb composite sandwich structures [29]. In terms of metal structures, Xie et al. [30,31] proposed a DUW-based method to monitor temperature variations and thermal-shock-induced microstructural alterations in steel specimens. It can be seen from the above research that DUW has been widely studied on condition monitoring, such as distributed cracks and microstructural changes of the medium. The corresponding results proved that DUW has good performance in quantifying these parameter changes. However, miniature local damage is more commonly seen and is essential for infrastructure. Recently, local damage detection using DUW has attracted more attention. Pacheco and Snieder [32] developed the DUW technique for local damage evaluation, but this method may not be adequate for small localized damages [33]. Fröjd and Ulriksen [21] utilized decorrelation of DUW signals in a specific time window to evaluate local holes in a concrete floor slab.
However, the decorrelation coefficient calculation using a specific time window may not be as robust as using longer segments of recorded signal [34]. Michaels et al. [35] proposed local temporal coherence (LTC) to detect local damage in aluminum plate. It was found that LTC had good performance in small defect detection. Fröjd and Ulriksen [36] combined amplitude and phase information by establishing the Mahalanobis model to evaluate damages induced by impact hits on concrete slabs. The studies above have shown the sensitivity of DUW to miniature and local changes in concrete and composite structures. However, few studies have explored the application of DUW to damage detection in railway tracks. Wang et al. [37,38] innovatively applied DUW to condition monitoring of railway turnouts. Remnant cross-correlation coefficients of the DUW signals were extracted to detect defects. This pioneering work provides a benchmark-free method for condition monitoring of railway turnouts. Nevertheless, the sensitivity to local damage and wide-range sensing capacity of DUW on railway tracks are not fully studied in this research. The lead-zirconate-titanate (PZT) sensor is adequate for ultrasonic wave detection under normal conditions [39]. However, the railway system usually has strong electromagnetic interference (EMI), which might reduce the SNR of ultrasonic signals received by the PZT sensor. On the other hand, the connection wires of the PZT sensor for voltage delivery limit its sensing range and multipoint installation for distributed sensing. Furthermore, the dielectric constant and silver cladding of the PZT sensors will be easily degraded under long-term environmental exposure [40]. These factors impede the wider application of PZT sensors for ultrasonic sensing of railway tracks. Recently, fiber Bragg grating (FBG) sensors have been explored to receive ultrasonic waves due to their advantages of being lightweight, having the potential to multiplex, and being immune to EMI, moisture, and high temperatures. Moreover, the FBG sensor is applicable for locations with complex shapes where PZT transducers are hard to access. Tian et al. [41] established an FBG array to receive Lamb wave and visualize damages on aluminum plate. Wang and Wu [42,43] applied a phase-shift FBG sensor to obtain ultrasonic signals, and the nonlinearity of ultrasound was utilized to evaluate fatigue cracks. Yu et al. [44] took advantage of the high-temperature resistance of FBG sensors to detect damage at very high temperature. The research above proved the FBG sensor's immunity to harsh environments and high sensitivity to ultrasonic waves. Cano et al. [45] successfully verified the feasibility of FBG sensors for receiving ultrasonic waves on subway rail specimens. Wang et al. [46] explored the optimal excitation frequency of ultrasonic guided waves for damage detection on rails by using FBG sensors. However, the application of FBG sensors for DUW-based damage detection on railway tracks has not yet been investigated. In this paper, a hybrid sensing system with a PZT actuator and an FBG sensor is proposed to obtain DUW on railway tracks for damage detection. Laboratory tests are conducted on a segment of a 60 kg/m railway track to investigate the sensitivity and sensing range of DUW. Damage indices based on energy attenuation and waveform distortion of DUW signals are proposed and validated to quantify different damage levels.
This work will contribute a new sensing system for damage detection of railway tracks and provide a deep understanding of the interaction between damage and DUW. Methodology DUW is chosen to monitor the conditions of railway tracks based on the PZT/FBG hybrid sensing system. Different damage levels are introduced by attaching Blu Tack blocks on the track web to explore the sensitivity and sensing range of this method. Defects will cause not only energy attenuation but also waveform distortion of the DUW signal. Therefore, energy-based and waveform-distortion-based damage indices are respectively defined to indicate conditions of the railway track. A flowchart of the proposed method is presented in Figure 2. Working Principle of FBG Sensor Fiber Bragg grating (FBG) is a fiber optic sensor that has periodic changes in the refractive index of the fiber core. The periodical grating structures could act as a narrowband filter, and the wavelength of reflected light is called the Bragg wavelength, which can be expressed by λ_b = 2nΛ, where λ_b is the Bragg wavelength of FBG, n is the effective refractive index of the optical fiber, and Λ is the grating period.
When broadband light is propagated into the FBG sensor, light with central wavelength λ_b will be reflected, while other components will be transmitted through the grating, as shown in Figure 3. Micro vibration induced by the ultrasonic wave will cause a Bragg wavelength shift, and the relationship between the Bragg wavelength shift and strain along the fiber direction without temperature variation can be represented by the calibration relation Δλ_b = C_ε ε_z [47], where Δλ_b is the Bragg wavelength shift, C_ε is the material constant obtained from calibration experiments, and ε_z is the strain along the fiber axis. The conventional optical spectrum analyzer is unqualified to capture high-frequency vibration induced by ultrasonic wave due to the limitation of the demodulation speed. The two most prevailing FBG demodulation techniques for ultrasonic detection are the intensity demodulation technique and the edge filter demodulation technique [48]. The light source for intensity demodulation is a broadband light source, while for edge filter demodulation, it is a narrowband light source. Even though intensity demodulation has potential application for multiplexing, its SNR is relatively low. On the other hand, the edge filter technique has been widely used in ultrasonic detection due to the high signal quality [49][50][51]. The demodulation principle of the edge filter technique is adopted in this study and can be explained in Figure 4. The light source wavelength is locked at the 3 dB point of the FBG spectrum. The intensity of the reflected light will change with the Bragg wavelength shift induced by micro vibration and be proportional to the amplitude of the ultrasonic wave. It has been proven that the dominant noise of the FBG ultrasonic sensing system is laser intensity noise [44]. To reduce this noise and improve the SNR, a balanced photodetector (BPD) is utilized to receive both reflection and transmission signals of the FBG. The voltage obtained by two parts of the BPD would simultaneously experience changes with the same amplitudes but opposite phases [52].
DUW Propagation in Railway Track

Many studies have utilized the direct wave to obtain clear responses by generating and extracting pure modes of ultrasonic guided waves [53,54]. However, methods for exciting the ideal mode and minimizing dispersion in railway tracks need to be further studied [55]. Different from the direct wave, the DUW-based method utilizes later wave packets that are reflected and scattered multiple times. A schematic illustration of the diffuse ultrasonic wave field on a railway track is shown in Figure 5. The low acoustic attenuation coefficient of steel allows the ultrasonic wave to propagate for a longer time. The DUW method utilizes the change of the diffuse ultrasonic wave field, which consists of many modes, for damage detection.

Energy-Based Damage Index of Diffuse Wave

The energy-based damage index has been widely used in ultrasound-based damage detection [40,56]. As discussed above, local damage in the medium will lead to energy attenuation of DUW. In this research, a damage index derived from wavelet-packet-based energy (WPE) analysis [57,58] is proposed to evaluate the damage severity of railway tracks. In general, the DUW signal X will be decomposed into 2^n frequency bands by n-level wavelet packet decomposition. The j-th frequency band Xj could be given by: where j represents frequency bands varying from 1 to 2^n, and m is the total amount of data. Then, the wavelet packet energy of the j-th frequency band Ej can be calculated by: The energy vector E of the DUW is given by: Then, the energy-based damage index (EDI) based on WPE can be further defined by: where Eintact,j is the energy of the j-th frequency band of the intact (baseline) signal. If no change occurs in the medium, Eintact,j is close to Ej, and the EDI will approach 0. If a strong scattering source exists in the medium, Ej will be strongly different from Eintact,j, and the EDI will approach 1.
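The four WPE/EDI expressions were also lost in extraction, so the sketch below is an assumption-laden reconstruction rather than the paper's exact formulas: band energies are taken as sums of squared wavelet packet coefficients, and the EDI is written as a baseline-normalised energy difference, a form that reproduces the stated limiting behaviour (0 for an unchanged medium, approaching 1 under strong scattering). PyWavelets and the db4 wavelet stand in for the unspecified MATLAB toolbox settings.

```python
import numpy as np
import pywt

def wpe_vector(x, n=3, wavelet="db4"):
    """Energies of the 2**n frequency bands of an n-level
    wavelet packet decomposition of signal x."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=n)
    bands = wp.get_level(n, order="freq")          # the 2**n terminal nodes
    return np.array([np.sum(node.data ** 2) for node in bands])

def edi(x_measured, x_intact, n=3):
    """Energy-based damage index (assumed form): 0 when band energies
    match the intact baseline, growing towards 1 as they diverge."""
    e, e0 = wpe_vector(x_measured, n), wpe_vector(x_intact, n)
    return float(np.sum(np.abs(e - e0)) / np.sum(e0))
```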
Waveform-Distortion-Based Damage Index of Diffuse Wave

DUW has been widely used in global damage detection by using time delay or relative velocity change to quantify the global compressing or stretching of DUW waveforms. However, those studies are established on the assumption that changes in the medium are global, which is inadequate for local damage detection. Local temporal coherence (LTC), which has shown its effectiveness in quantitatively describing signal changes induced by local damage [35], is defined as: where X1(t) is the baseline signal and X2(t) is the signal obtained from the damaged condition. LTC values quantify the correlation between two signals in a time window. The damaged signal is first translated by τ in the time domain, and the similarity of the two signals is calculated in the time window [T0 − T, T0 + T]. τ values ranging from −2.5 µs to 2.5 µs in steps of 0.005 µs are used to calculate the LTC in this study. The length of the time window is set to 0.4 ms, which is 10 times the excitation signal length. The time window is moved along the time axis in steps of 0.05 ms to obtain the LTC values of the entire signal. An example envelope of the LTC between the baseline signal and a measured signal is shown in Figure 6. It is noted that waveforms in the first 1 ms are discarded since the ultrasonic wave at the very beginning has low sensitivity to damage.
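The LTC integral likewise did not survive extraction; the sketch below assumes the usual windowed, normalised cross-correlation form used in diffuse-field work. Lags are expressed in whole samples for simplicity; reproducing the paper's 0.005 µs lag step at the 20 MHz sampling rate (50 ns per sample) would require sub-sample shifts, i.e. interpolation or resampling.

```python
import numpy as np

def ltc(x1, x2, center, half_win, lags):
    """Local temporal coherence between baseline x1 and measured x2 in the
    window [center - half_win, center + half_win], for each integer lag.
    All arguments are in samples; the shifted window must stay inside x2."""
    seg1 = x1[center - half_win:center + half_win]
    values = []
    for lag in lags:
        seg2 = x2[center - half_win + lag:center + half_win + lag]
        num = np.dot(seg1, seg2)
        den = np.sqrt(np.dot(seg1, seg1) * np.dot(seg2, seg2))
        values.append(num / den)
    return np.asarray(values)
```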
Peak coherence (PC) represents the maximum LTC value with respect to τ at each time window, and it can be given by: PC values calculated from the signals in Figure 6 are shown in Figure 7. Aiming at quantitatively describing the distortion degree of the signals, the peak coherence change (PCC) is defined as the difference between the maximum PC value and the average PC value, and it can be given by: It is noted that the temperature effect could be discriminated in the process of calculating the PCC value [59]. Therefore, temperature variation could be removed, and only defect information would remain in the PCC values.
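Both definitions translate directly into code on top of the ltc() sketch above: PC is the maximum LTC over the lag grid in each window, and PCC is the spread between the largest and the mean PC value. Window centres, half-width and the lag grid are left to the caller.

```python
import numpy as np

def pc_and_pcc(x1, x2, centers, half_win, lags):
    """Peak coherence per window (max LTC over all lags) and the
    waveform-distortion index PCC = max(PC) - mean(PC)."""
    pc = np.array([ltc(x1, x2, c, half_win, lags).max() for c in centers])
    return pc, float(pc.max() - pc.mean())
```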
Experiment Procedure

To verify the feasibility of applying DUW to damage detection of railway tracks using the PZT/FBG hybrid sensing system and demonstrate its sensing range and sensitivity, a series of laboratory tests is conducted on a section of a 60 kg/m railway track with a 400 mm length.

PZT/FBG Hybrid Ultrasonic Sensing System

The proposed sensing system in this study consists of the signal generation module and the signal acquisition module, as shown in Figure 8. The ultrasonic wave is generated by the typical PZT-based ultrasonic generating system, and the ultrasonic wave will propagate along the railway track. The micro vibration induced by the ultrasonic wave will be perceived by the FBG sensor, and the structural condition can be evaluated by analyzing the response signal. The detailed working procedure of the whole PZT/FBG hybrid ultrasonic sensing system is as follows: A 250 kHz, ten-cycle sinusoidal tone burst modulated by a Hanning window is first generated by an arbitrary waveform generator (PXI-5412, National Instruments, Austin, TX, USA), as shown in Figure 9. The excitation signal is amplified 200 times using a linear power amplifier (HVA-400-A, Ciprian, La Tronche, France). The amplified signal is sent to the PZT disc (diameter: 8 mm, thickness: 1 mm) to generate ultrasonic waves. An optical spectrum analyzer (AQ6370D, Yokogawa, Tokyo, Japan) is used to obtain the reflecting spectrum of the FBG, and a tunable laser (TLB-6700, Newport, RI, USA) is utilized to emit a narrowband light source according to the FBG reflecting spectrum. In this study, the laser wavelength is set at 1556.07 nm, which is the 3 dB position on the left-hand side of the FBG spectrum. The micro strain induced by ultrasonic waves will shift the Bragg wavelength and cause changes in the optical intensity. Both the transmitted and reflected light of the FBG sensor are guided into the two parts of the BPD (2117-FC, Newport, RI, USA), which converts the optical signal into a voltage signal. The voltage signals are obtained by the oscilloscope (PXI-5412, National Instruments, Austin, TX, USA), and the sampling frequency of the oscilloscope is set at 20 MHz. Considering that the frequency of the excitation signal is 250 kHz, the wavelength of the ultrasonic wave with different modes ranges from approximately 10 mm to 20 mm, and an FBG sensor with a 10 mm grating length is adopted in this study. The whole procedure is controlled by LABVIEW software, and data analysis is conducted in MATLAB software.
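The excitation waveform is fully specified above and is trivial to generate; as a consistency check, ten cycles at 250 kHz last 40 µs, which matches the earlier statement that the 0.4 ms LTC window is ten times the excitation signal length.

```python
import numpy as np

fs = 20e6        # oscilloscope sampling rate (20 MHz)
f0 = 250e3       # excitation centre frequency
cycles = 10      # ten-cycle tone burst

t = np.arange(0, cycles / f0, 1 / fs)                     # 40 us burst
burst = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)   # Hanning-windowed tone burst
```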
Experiment Setup

Both the PZT actuator and the FBG sensor are attached on the rail web using epoxy adhesive, and the distance between the PZT and FBG is 200 mm. Blu Tack blocks are utilized to introduce damages on the rail web, which has demonstrated effectiveness in simulating defects in many studies [60,61]. The diameters of the defects are set as 2.5 mm, 5 mm, 7.5 mm, 10 mm, 20 mm, and 30 mm. Defects smaller than 10 mm are considered sub-wavelength defects, while defects larger than 10 mm are regarded as over-wavelength defects. Damages are set at four different locations on the rail web. As shown in Figure 10, site 1 and site 2 are located on the direct sensing path between the PZT actuator and the FBG sensor. Sites 3 and 4 are placed outside the direct sensing path. The intact railway track is first tested to obtain the baseline signals. Measurements are then conducted in each damaged condition. A thermocouple is placed on the rail surface to measure the temperature, and an air conditioner is used to ensure that the room temperature is constant for all tests. It needs to be noted that thermal variations in practice will shift the FBG peak, which may affect the reflective intensity of signals. To eliminate the temperature effect, a preliminary test can be conducted to obtain the central wavelength of the FBG before each ultrasonic measurement. The 3 dB point of the FBG spectrum can then be determined accurately for adjusting the laser wavelength.
Experimental Results and Discussion

Signals with a length of 30 ms are recorded in each measurement. Each condition is measured ten times, and the signals are filtered and smoothed using a Butterworth band-pass filter through a toolbox in MATLAB. The filtered signals in each condition are further averaged to reduce stochastic noise. Experiments are first conducted on site 1 and site 2 to demonstrate the feasibility of the proposed sensing system when damage is located along the direct sensing path. The DUW obtained from site 1 and site 2 is shown in Figures 11 and 12.
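The preprocessing chain is easy to reproduce. The corner frequencies and order of the Butterworth band-pass filter are not reported, so the fourth-order 150-350 kHz band bracketing the 250 kHz excitation below is an assumption; scipy stands in for the MATLAB toolbox.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20e6
# Assumed pass band around the 250 kHz excitation (not reported in the paper)
b, a = butter(4, [150e3, 350e3], btype="bandpass", fs=fs)

def preprocess(recordings):
    """recordings: (10, n_samples) array, the ten repeats of one condition.
    Zero-phase band-pass filter each repeat, then average to reduce noise."""
    filtered = filtfilt(b, a, recordings, axis=-1)
    return filtered.mean(axis=0)
```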
In general, the differences between the disturbed signals and baseline signals become larger with increasing damage size at both site 1 and site 2. A detailed presentation of the signals in Figure 12e is shown in Figure 13. It is obvious that the direct wave (Figure 13b) is almost identical while the DUW signals (Figure 13c) vary in both amplitude and phase. The comparison between the direct wave and DUW shows the high sensitivity of DUW for damage detection. For damage quantification, the energy-based damage index (EDI) is obtained for different configurations, as shown in Equation (7). Since the differences between the measured and baseline signals increase with time, which demonstrates that damage information accumulates in the wave propagation process, different segments of the signals to be analyzed may lead to different results. Therefore, three configurations of the time window, including 1-10 ms, 10-20 ms, and 20-30 ms, are utilized to calculate the EDI values.
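Segmenting the 30 ms records into these three windows before applying the edi() sketch from earlier is a one-liner per window. The window bounds come straight from the text; edi() itself remains the assumed reconstruction described above.

```python
fs = 20e6
windows_ms = [(1, 10), (10, 20), (20, 30)]   # the three stated analysis windows

def edi_per_window(x_measured, x_intact):
    out = {}
    for t0, t1 in windows_ms:
        i0, i1 = int(t0 * 1e-3 * fs), int(t1 * 1e-3 * fs)
        out[(t0, t1)] = edi(x_measured[i0:i1], x_intact[i0:i1])
    return out
```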
As shown in Figure 14, the calculated EDI values do not monotonically increase with damage size. Specifically, EDI values fluctuate when the defect sizes are smaller than 10 mm and increase significantly in the 20 mm and 30 mm conditions in all three time-window configurations. The reason is that over-wavelength defects cause more significant energy attenuation than sub-wavelength ones, and the EDI reflects the global features rather than the detailed information of the signals, which makes it adequate for severe damage detection but inefficient in quantifying incipient damage. Figure 15 shows the variations of the DUW signals in a specific time window of site 1 and site 2 with different damage sizes. It is obvious that the waveforms in this time window are locally distorted rather than globally stretched or compressed. On the other hand, defects with different sizes will distort the waveform to different degrees. The signals in the time domain show that railway track damage might be quantitatively described by evaluating the waveform distortion degrees of the DUW signals. The LTC values, which are obtained from the coherence of the measured signal and the baseline signal, are then extracted under every condition using the method proposed in Section 2.2.3. PC and PCC values of the signals are calculated based on the LTC values to quantitatively present the variations of the DUW signals under different damage sizes. As shown in Figure 16, PC values drop as a function of time, which agrees well with the results obtained by Michaels [36] and Lu [60] and indicates that defect information accumulates in the process of DUW propagation. Moreover, PC values decrease with increasing defect size at both site 1 and site 2. The PCC values are then calculated from the PC results to quantify the damage at site 1 and site 2, and the results are shown in Figure 17. In general, PCC values increase with the damage size in all the damage conditions, which proves that PCC is more effective than EDI in damage quantification. Specifically, the PCC develops slowly when the defect diameter is smaller than 10 mm.
On the one hand, the growth in damage size from 2.5 mm to 10 mm is relatively small, which accounts for the slow development of the PCC values; on the other hand, the multiple interactions between the waves and the defects allow even these subtle changes to be perceived by the DUW. The PCC increases dramatically when the damage sizes are larger than 10 mm. The reason is that defects will strongly interact with the DUW when their sizes are close to the wavelength of the ultrasonic waves. The results in Figure 17 demonstrate that damage located on the direct sensing path can be detected using the proposed method. DUW propagates not only through the direct path between the PZT and FBG but also through paths outside the direct path, which enables DUW to detect damage beyond the direct path. Therefore, the PCC values of the signals obtained from site 3 and site 4 are examined to investigate the sensing range of DUW, as shown in Figure 18. PCC values increase with the damage size at both site 3 and site 4, which verifies the wide sensing capacity of DUW.

Conclusions

This study explores DUW for damage detection of railway tracks using a PZT/FBG hybrid sensing system. The sensitivity and sensing range of the proposed method are investigated by a series of experiments.
The conclusions of this research can be summarized as follows:
(1) The PZT/FBG hybrid sensing system is adequate for damage detection in railway tracks. The sensitivity of the direct wave and DUW is compared in the time domain. Variations of the DUW signals are much larger than those of the direct wave signals, which demonstrates that DUW is more sensitive than the direct wave for damage detection.
(2) The energy-based damage index, EDI, is first defined and utilized to quantify the damage severity at sites 1 and 2. The EDI is sufficient to evaluate over-wavelength defects but is not adequate for sub-wavelength defects.
(3) The waveform-distortion-based damage index, PCC, is defined and utilized for damage detection on railway tracks. The results show that the PCC values increase with damage size at all four sites, and the damage index is efficient for damage detection.
In the future, environmental effects such as temperature and moisture on the proposed system need to be studied. Advanced signal processing techniques should be considered to make full use of the abundant information in diffuse ultrasonic signals. Longer railway tracks also need to be studied to explore the sensing range of the proposed method along the length of the track.
Developing an affirmative position statement on sexual and gender diversity for psychology professionals in South Africa

Background. Against the background of the dominance of patriarchy and heteronormativity in Africa and the resultant stigma, discrimination and victimisation of sexually and gender-diverse people, this article reports on the development of an affirmative position statement by the Psychological Society of South Africa (PsySSA) for psychology professionals working with sexually and gender-diverse people. The position statement is an attempt to contribute positively to the de-stigmatisation, amongst psychology professionals, of all people with diverse sexual and gender identities.

Objective. In documenting and reflecting on the process of developing the statement — a first on the African continent — the article aims to contribute to the potential resources available to others in their work on similar projects around the world.

Design. Although initially intended to be relevant to the African continent, the position statement is appropriate to the South African context specifically, but developed in consultation with a range of stakeholders, also from other African countries.

Results. Concerns expressed during stakeholder consultations, and thus taken into account in the development of the statement, include relevance to other African countries, negotiating the politics of representation and language, the importance of including gender and biological variance in addition to sexuality, and the need to be sensitive to how Western influence is constructed in some African contexts.

Conclusion. Other national psychology organisations stand to benefit by 'lessons learned' during this country-specific process with global implications, especially with respect to broadening the lens from lesbian, gay, bisexual, transgender and intersex (LGBTI) to sexual and gender diversity, as well as an acknowledgement of the multiple and fluid developmental pathways around sexuality and gender, in general.

Gender diversity. The range of different gender expressions that spans across the historically imposed male-female binary. Referring to 'gender diversity' is generally preferred to 'gender variance', as 'variance' implies an investment in a norm from which some individuals deviate, thereby reinforcing a pathologising treatment of differences among individuals.

Intersex. A term referring to a variety of conditions (genetic, physiological or anatomical) in which a person's sexual and/or reproductive features and organs do not conform to dominant and typical definitions of 'female' or 'male'. Such diversity in sex characteristics is also referred to as 'biological variance', a term which risks reinforcing pathologising treatment of differences among individuals, but which is used with caution in this document to indicate an inclusive grouping of diversity in sexual characteristics, including, but not limited to, intersex individuals.

Sexual diversity. The range of different expressions of sexual orientation and sexual behaviour that spans across the historically imposed heterosexual-homosexual binary.

Sexual orientation. A person's lasting emotional, romantic, sexual or affectional attraction to others (heterosexual, homosexual/same-sex sexual orientation, bisexual or asexual).
Transgender. A term for people who have a gender identity, and often a gender expression, that is different to the sex they were assigned at birth by default of their primary sexual characteristics. The term is also used to refer to people who challenge society's view of gender as fixed, unmoving, dichotomous, and inextricably linked to one's biological sex. Gender is viewed more accurately as a spectrum, rather than a polarised, dichotomous construct. This broad term encompasses transsexuals, gender queers, people who are androgynous, and those who defy what society tells them is appropriate for their gender. Transgender people can be heterosexual, bisexual, homosexual or asexual.

In this article, an initial focus is placed on the process of developing the position statement and the institutional and social background against which the statement was conceived, before providing a brief introduction to the affirmative stance informing the statement. Specific attention is given to concerns raised during the consultation process by stakeholders, also from other African countries. These concerns, among others, relate to:
• relevance to other African countries;
• negotiating the politics of representation and language;
• the inclusion of gender diversity in addition to sexuality;
• sensitivity to multiple and fluid sexual and gender identities; and
• the need to be sensitive to how Western influence is constructed in some African contexts.
The discussion is rounded off by highlighting the ways in which the position statement has been disseminated and some reactions from practitioners and others in the broader healthcare environment in South Africa, and elsewhere. By reflecting on the process of developing and disseminating the statement, the hope is also to contribute to the potential resources available to others globally in their work on similar projects.

The PsySSA African LGBTI Human Rights Project

PsySSA is a non-profit, professional association of persons involved in the academic, research and practical application of the discipline of psychology. PsySSA, established in 1994, is the nationally representative learned society for psychology in South Africa, and is recognised as such by the International Union of Psychological Science (IUPsyS). As per its constitution, PsySSA is committed to the transformation and development of South African psychology to serve the needs and interests of all the people of South Africa, and it aims to advance psychology as a science, a profession and as a means of promoting human wellbeing (PsySSA, 2011). As part of its efforts to achieve these aims, PsySSA is a member of the International Psychology Network for Lesbian, Gay, Bisexual, Transgender and Intersex Issues (IPsyNET).
IPsyNET is comprised of national, multinational and international psychological associations. These associations are cooperating to increase international collaboration and knowledge amongst practitioners concerned with LGBTI issues, stimulate and apply psychological research and guidelines that address the needs and concerns of LGBTI populations, and increase the number of psychological associations that reject the notion of same-sex sexuality as a mental disorder and promote affirmative mental health practice for LGBTI people (IPsyNET, 2013). The objective of the PsySSA African LGBTI Human Rights Project is to assist PsySSA in becoming a regional hub to promote capacity and membership of other psychological associations throughout Africa in the work of IPsyNET and to foster active and vocal regional participation in debates around LGBTI issues and concerns (Victor et al., 2014).

Context and aim of the position statement

South Africa has seen significant socio-legal and policy developments in the protection of the human rights of all people in the country, and respect for diversity and concomitant non-discrimination based on, amongst others, gender and sexual orientation (Republic of South Africa, 1996). These developments have brought about changes at an institutional and disciplinary level with, for instance, the ethical code for health professionals including a focus on human rights, diversity and non-discrimination within a general do-no-harm framework (Department of Health, 2006). However, the aforementioned developments have not necessarily effected changes at a broader societal level. Sexualities remain heavily influenced by patriarchal systems that privilege heterosexuality (Jackson, 2006). A patriarchal and heteronormative model of gender and sexuality perpetuates unequal power relations between men and women, and entrenches male privilege, contributing to high levels of sexual- and gender-based violence against women in South Africa (Dartnall & Jewkes, 2013). Further to this, such a rigid and oppressive model of gender and sexuality limits the courses of action available to men, in that a normative male identity is associated with expectations of invulnerability and self-reliance, contributing to risky sexual behaviour and low rates of health-seeking behaviour among many South African men (Lynch, Brouard, & Visser, 2010). Public attitudes in South Africa to same-sex sexuality remain overwhelmingly negative, with a nationally representative survey indicating that 84% of the population say that it is always wrong for two adults of the same sex to have sexual relations (Smith, 2011). Human rights violations and hate crimes against sexual and gender non-conforming minorities are also increasingly being reported (Human Rights Watch, 2011).
Current healthcare provision in South Africa is generally based on the assumption of similarity rather than acceptance of diversity (Rispel & Metcalf, 2009). Given the prevalence of discrimination at public health facilities, LGBTI people are less likely to access healthcare in the public sector (Stevens, 2012; Wells & Polders, 2003). In recognition of this, and to assist psychology professionals in South Africa in their related endeavours, an affirmative position statement on sexual and gender diversity, including LGBTI concerns, was developed. This position statement supplements the harm-avoidance approach present in the South African Health Professions Act (Department of Health, 2006) by outlining specific themes for psychology professionals to consider in assuming an affirmative stance.

Exploring an affirmative stance

The development of a South African position statement that is affirmative of sexual and gender diversity follows similar initiatives by other professional associations. The term 'affirmative psychotherapy' was initially developed in relation to sexual orientation (thus lesbian, gay and bisexual [LGB] only), and it is therefore firstly discussed in this article from such a position only. Although with different emphases, some common elements in affirmative approaches to LGB sexualities are apparent in the work of a variety of authors, such as Davies (1996), Milton, Coyle, and Legg (2002), and Ritter and Terndrup (2002). These authors concur that an affirmative approach includes that sexual diversity, per se, should not be seen as the cause of psychological difficulties or pathology; the perspective is rather one of recognition of LGB sexualities as normal and natural variances of human sexuality. It is important that the practitioner takes contextual factors into account, in particular how homophobia, heteronormativity, prejudice and stigma influence mental health and wellbeing, and acknowledges the influence of society and significant others on the LGB client. Practitioners also need to be able to empathise with the experiences of LGB clients, including being knowledgeable about LGB sexualities, the diversity of identities and experiences within LGB communities, and lifestyles. An affirmative approach implies that practitioners ought to be comfortable in exploring their own sexualities to avoid their potential personal biases affecting their practice. Taking an actively positive view of LGB lives includes assuming that LGB clients have the potential creativity and internal resources to deal with their difficulties and problems (Davies, 1996; Milton et al., 2002; Ritter & Terndrup, 2002). Practitioners need to focus on the way their clients describe themselves, rather than imposing technical language. Practitioners furthermore need to provide a space for clients to explore their possible identities, instead of assuming a particular endpoint. In addition, therapeutic efforts aimed at such a specific endpoint, for instance gender conformity or a heterosexual orientation, are potentially harmful, dangerous and in conflict with medical ethics, and should be avoided (Academy of Science of South Africa, 2015). As is later indicated, such an affirmative lens could be applied to all people who walk through a professional's door, and implies a cultivated and ongoing sensitivity to and acceptance of sexual and gender diversity.
Establishing the working group

The development of the South African position statement serves as a first step in achieving the longer-term goal of the PsySSA African LGBTI Human Rights Project to establish affirmative psychological practice guidelines that may or may not also be relevant elsewhere in Africa. Towards this objective, representatives from across Africa were identified and recruited to attend a pre-congress workshop at the International Congress of Psychology held in Cape Town, South Africa, in July 2012. The pre-congress workshop, attended by 38 people, provided an ideal platform to bring together experts and interested parties to discuss the possibility of developing affirmative practice guidelines in relation to sexual and gender diversity in Africa (Victor, 2012). The workshop culminated in the establishment of a working group of 24 members, constituted of stakeholders and mental health professionals spanning South Africa, Nigeria, Cameroon, Uganda and Tanzania, tasked with the development of the guidelines. The workshop highlighted several issues that would be important to consider in developing practice guidelines for Africa and prompted discussions regarding the advantages and disadvantages associated with first developing a position statement for South Africa before proceeding to practice guidelines with relevance for the continent (Victor, 2012). The main debates emerging from the workshop are discussed below.

The challenges and debates in developing African guidelines

Debating the guidelines development process commenced with agreeing that psychology as a discipline is significantly underdeveloped in Africa. On the African continent, psychological wellbeing is often achieved through avenues other than professional services, including traditional healers and clergy (Campbell-Hall et al., 2010). A first challenge thus presented itself: in this context, focusing only on guidelines for the discipline of psychology can be exclusionary, as there is a need to consider other healthcare and mental health workers, such as volunteers, traditional healers and related healing systems. Accordingly, it was agreed during the workshop that, whilst this was an initiative from the discipline of psychology, an opportunity presented itself to develop guidelines for professionals within a broader mental health arena. In developing the guidelines, the suggestion was thus that care be taken to ensure that the document reflected this wider target set, both in theory and application.

An important conceptual concern raised during the workshop was that the privileging of individual human rights is not universally accepted in all parts of Africa (Academy of Science of South Africa, 2015). In developing the guidelines, inclusive of LGBTI concerns, different regions and countries in Africa would need to be sensitive to whether a human rights perspective would necessarily provide the most acceptable entry point. An alternative stance is that of positioning LGBTI concerns within a mental health and wellbeing framework. It was agreed that such a framework, which emphasises competent healthcare service provision, could be particularly valuable in contexts where same-sex practices remain subject to constitutional and legal discrimination.
A further conceptual theme centred on the utility of framing affirmative practice guidelines for the African context in relation to identity politics. Identity politics often rely on self-identified categories of sexual orientation and gender identity to raise consciousness around experiences of oppression related to particular identities (Mertus, 2007). In the United States, identity politics provided a valuable frame for the development of affirmative practice guidelines, in that LGB rights advocates within the APA had to assume an activist role to motivate why, despite homosexuality being declassified as a disorder in 1973, there remained a need for the discipline to formulate an affirmative stance on LGB concerns. The relevance of dominant Western analytical categories when researching African sexualities has, however, been questioned, and in the African context the argument has been made for the fluidity of sexuality and gender, instead of focusing on fixed notions of identity (Epprecht, 2006). Following from this, it was important to broaden the LGB focus when developing guidelines for African contexts to attend to sexual and gender diversity in general.

Accordingly, in developing the PsySSA position statement, the aim was to extend such an affirmative stance regarding sexual orientation to represent a wider inclusiveness of sexual and gender diversity with specific reference to LGBTI concerns, i.e. sexual orientation, gender identity and biological variance. This expanded lens also reflects similar global developments, such as the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM) (American Psychiatric Association, 2013) moving away from pathologising positions around transgender concerns to a more affirmative approach, as well as the APA's Guidelines for psychological practice with transgender and gender nonconforming people (APA, 2015).

Related to the above, participants at the workshop suggested that expanding the view from single-identity politics to multiple dimensions of identities would also bring to the fore the interaction of various forms of oppression, such as those based on race and socio-economic status, which result in different forms of oppression affecting a person in interrelated ways. A lens that is sensitive to intersectionality (see APA, 2015) could potentially highlight the way heteronormative and patriarchal contexts not only have harmful consequences for LGBTI persons but also constrain the courses of action available to all people. Such contexts contribute to stigma, discrimination and victimisation, based on power differentials along varied lines of oppression. Acknowledgement of relevant intersectionalities could potentially avoid the pitfalls of taking an 'othering' stance and, instead, allow for reflection on how psychology professionals could challenge stigma and discrimination informed by unequal systems of sexuality and gender broadly.
Another concern raised during the workshop was that South Africa's leading role in the development of the sexual and gender diversity-related affirmative guidelines might be perceived as neo-colonialist and as furthering the aims of an imperialist agenda in other African countries. This concern was predominantly based on the way political leaders in several African countries have at times drawn on a discourse of same-sex sexuality being a 'Western import', and consequently regarded it as 'un-African', to substantiate a construction of an African identity separate from Western influence (Hoad, 2007). Following from this, guidelines affirming same-sex sexuality could potentially be resisted as un-African if their development is perceived as predominantly serving a South African agenda, a country at times associated with Western influence.

Sexuality and gender remain under-researched in Africa, and there is a dearth of scientific evidence that could be drawn on to support the development of affirmative guidelines relevant to Africa (PsySSA, 2013). Discussions during the workshop reflected the view that following the developmental path of more than twenty years, which culminated in establishing affirmative guidelines in the United States, was not tenable in the face of urgent concerns in the African context: the window of opportunity to develop guidelines presented itself at the time. This window was evidenced by the International Congress of Psychology held on African soil for the first time in 2012, as well as by the launch of the Pan-African Psychology Union (PAPU) in 2014 (Nel, 2014). PAPU is a professional body that could provide an opportunity for the development of a mutual African agenda around gender and sexuality, which is not driven by one country. In addition, establishing an African evidence base had to take into account different perspectives, particularly within indigenous contexts, of what constitutes knowledge and evidence. The group concurred that international research should be used as relevant and that gaps for further research in African contexts had to be identified, and appropriate funding mechanisms developed to address such gaps.

In summary, reflections from the workshop shaped the focus of the PsySSA African LGBTI Human Rights Project to be cognisant of the broader healing systems drawn on in African contexts. The workshop sensitised the group to the following:
• the need for contextual sensitivity to the strategic benefit of a human rights position or a mental health and wellbeing position in advancing the interests of LGBTI persons;
• the relevance of framing this work in relation to fluidity in sexual and gender diversity;
• the importance of recognising the intersectionality of identities and experiences of discrimination and victimisation;
• the need to be sensitive to how Western influence is constructed in some African contexts; and
• the need to advance an affirmative view of sexual and gender diversity, while at the same time expanding the African body of knowledge available to inform such work.
A position statement for South Africa

Following the workshop, it became clear that creating practice guidelines for Africa, as a first step in this process, was neither realistic nor desirable. Different countries within Africa have vastly different understandings of human rights and the acceptance of sexual and gender diversity. Ideally, different African regions or countries would therefore need to develop their own guidelines to suit their local contexts. The involvement of a broader range of constituents at the development stage of the guidelines was, however, deemed critical, as this would ensure increased agreement with and acceptance of the process. In the face of these challenges, limited financial and time resources, and recognising the development and support in the discipline already available in South Africa, it was decided to redefine the aim of the project as constructing an affirmative position statement on sexual and gender diversity aimed at psychology professionals in South Africa and developed by PsySSA. The working group elected a core team to prepare this statement (see Acknowledgements for names). A period of intense activity followed between October 2012 and August 2013, with the team developing draft statements, presenting these to the working group, and inviting further commentary and feedback from a wider group of stakeholders, mainly through e-mail communication with personal lists and known individuals in sexual and gender diversity work in South Africa. These efforts culminated in the draft statement being presented to the PsySSA Executive Committee and PsySSA Council for ratification (in effect, the highest decision-making body of the learned society for psychology in South Africa, thus serving as ethical clearance) and finally launched at the PsySSA Congress in September 2013 (PsySSA, 2013; Victor et al., 2014). In the following, sections of the position statement (PsySSA, 2013, pp. 8-10) are quoted verbatim.

"Recognising the harm that has been done in the past to individuals and groups by the prejudice against sexual and gender diversity in South African society as well as in the profession of psychology, PsySSA hereby affirms the following. Psychology professionals -
1. Respect the human rights of sexually and gender diverse people, and are committed to non-discrimination on the basis of sexuality and gender, including, but not limited to, sexual orientation, gender identity, and biological variance;
2. Subscribe to the notion of individual self-determination, including having the choice of self-disclosure (also known as 'coming out') of sexual orientation, gender diversity, or biological variance;
3. Acknowledge and understand sexual and gender diversity and fluidity, including biological variance;
4. Are aware of the challenges faced by sexually and gender diverse people in negotiating heteronormative, homonormative, cisgendered (see section 'Glossary'), and other potentially harmful contexts;
5. Are sensitised to the effects of multiple and intersecting forms of discrimination against sexually and gender diverse people, which could include discrimination on the basis of gender; sexual orientation; biological variance; socio-economic status, poverty, and unemployment; race, culture, and language; age and life stage; physical, sensory, and cognitive-emotional disabilities; HIV and AIDS; internally and externally displaced people and asylum seekers; geographical differences such as urban/rural dynamics; and religion and spirituality;
6. Have an understanding of stigma, prejudice, discrimination and violence, and the potential detrimental effect of these factors on the mental health and well-being of sexually and gender diverse individuals;
7. Recognise the multiple and fluid sexual and gender developmental pathways of all people from infancy, childhood, and adolescence into adulthood and advanced age;
8. Understand the diversity and complexities of relationships that sexually and gender diverse people have, which include the potential challenges: (a) of sexually and gender diverse parents and their children, including adoption and eligibility assessment; (b) within families of origin and families of choice, such as those faced by parental figures, caregivers, friends, and other people in their support networks, for example, in coming to terms with the diversity, nonconformity, and/or minority status of their sexually and gender diverse significant other; and (c) for people in different relationship configurations, including polyamorous relationships.
9. Adhere to an affirmative stance towards sexual and gender diversity in policy development and planning, research and publication, training and education (including curriculum development, assessment, and evaluation of assessment tools), and intervention design and implementation (including psychotherapeutic interventions);
10. Support best practice care in relation to sexually and gender diverse clients by: (a) using relevant international practice guidelines in the absence of South African-specific guidelines; (b) cautioning against interventions aimed at changing a person's sexual orientation or gender expression, such as 'reparative' or conversion therapy; (c) opposing the withholding of best practice gender-affirming surgery and treatment and best practice transgender healthcare as outlined by the WPATH; and (d) encouraging parents to look for alternatives to surgical intervention in the case of intersex infants, unless for pertinent physical health reasons.
11. Are, if it be the case, aware of their own cultural, moral, or religious difficulties with a client's sexuality and/or gender identity, in which case they should disclose this to the client and assist her or him in finding an alternative psychology professional should the client so wish; and
12. Are committed to continued professional development regarding sexual and gender diversity, as well as to promoting social awareness of the needs and concerns of sexually and gender diverse individuals, which includes promoting the use of affirmative community and professional resources to facilitate optimal referrals."
Issues considered in developing the position statement

The process of developing the position statement brought with it renewed consideration of emphasis, contextual sensitivity and the anticipated utility of the document in the South African context. The statement itself outlines various positions in relation to sexual and gender diversity and moves from the general to the specific. It firstly addresses issues of human rights and self-determination, which is followed by -

• introducing the idea of diversity and fluidity in sexuality and gender identity;
• challenges faced by sexually and gender-diverse people in negotiating heteronormative contexts;
• the influence of multiple and intersecting forms of discrimination on sexually and gender-diverse people;
• the influence of stigma, prejudice and discrimination on mental health;
• the recognition of multiple and fluid sexual and gender developmental pathways of all persons; and
• the complexities of relationships within a sexually and gender-diverse context.

The final components of the statement deal with assuming an affirmative stance, following best practice care, continued professional development, and the promotion of social awareness around sexual and gender diversity (PsySSA, 2013; Victor et al., 2014).

The decision to adopt an affirmative stance in the position statement was made early in the process, and it remained a foundation against which the document was checked. As the draft position statement progressed, the understanding of an affirmative stance developed to include a broader area of sexual and gender diversity. This manifested in the statement by expanding references to 'LGBTI concerns' to refer to 'sexual and gender diversity' instead, in line with the views advanced during the workshop discussion. The decision to adopt this terminology was, firstly, based on an understanding that a broader set of people face the potentially negative effects of a heteronormative and homonormative, patriarchal society, which implies a shared struggle. Secondly, the affirmative statement could potentially hold increased utility and relevance for colleagues wanting to develop similar position statements in their respective African countries. Some previous efforts in psychology, internationally, to develop position statements or guidelines on sexuality and gender have at times excluded transgender persons, and intersex concerns have seldom, if ever, featured (see for instance APA, 2011 and the Hong Kong Psychological Society: Division of Clinical Psychology, 2012). Following from this, where the position statement does make mention of LGBTI concerns specifically, it was ensured that gender identity and biological variance were attended to in addition to sexuality.

An implication of expanding the focus of the statement beyond LGBTI concerns to attend to sexuality and gender more broadly was the aim to avoid various forms of othering or exclusion. In doing so, it was not assumed that the practitioner speaks from a non-sexed/non-gendered/non-raced/non-classed position, as is often the case in existing ethical codes. Questions had to be asked about the difficulties faced by practitioners working in predominantly heteronormative contexts and how the statement could assist them in dealing with societal prejudice related to sexuality and gender. In addition, the focus moved from the individual user only to the individual and his/her significant others, and to how stigma and discrimination affect them as well.
The affirmative stance also had implications for the way language was treated in the document. The group felt it was important to use non-essentialist language, as this provided a more open framework that recognises diversity. Essentialist language such as 'normal people' and 'normal preferences' was rephrased, which also meant that the focus was placed more on the affirmation of diversity and fluidity of gender and sexuality and less on specific minority groups. It was however felt that, in some instances, it was still necessary to include what might be thought of as essentialist terms, such as references to 'lesbian', 'gay', 'bisexual', 'transgender' and 'intersex' as categories of sexuality or gender identity. The reason for their inclusion was that these terms would be familiar to many and would also ensure that the positions of minorities, and the specific stigma, discrimination and trauma they experience, were not erased.

Considering the emphasis of an affirmative stance on contextual awareness, a key challenge in developing the position statement was to ensure that it was grounded in a South African body of knowledge. To this end, the small but growing body of work that constitutes South African LGBTI psychology was consulted during this process. Research on LGBTI people's experience with health providers in South Africa that was drawn on included -

• studies on gay men's experiences in psychotherapy groups (Nel, Rich, & Joubert, 2007);
• the experience of LGB people with psychological therapy and counselling (Victor, 2013; Victor & Nel, 2016);
• transgender people's experience with sexual health services (Stevens, 2012); and
• perceptions of healthcare providers around sexual orientation and treatment refusal due to sexual orientation (Rich, 2006; Wells, 2005; Wells & Polders, 2003).

Local policy and practice guidelines that were consulted included -

• healthcare provision for victims of hate crime (Nel, 2007);
• guidelines for service providers working with lesbians and gay people (OUT LGBT Well-being, 2007);
• guidelines for working with men who have sex with men (MSM) in an HIV/AIDS health service context (Anova Health Institute, 2010); and
• indigenous comments on the WPATH's Standards of Care (Gender Dynamix, 2011).
The structure and format of the position statement were driven by the practical utility of the document for psychology professionals in South Africa. While this may be different in other contexts, it was felt that information needed to be provided around the topic under discussion in the form of, for example, a comprehensive glossary that accompanies the statement. Knowing that the position statement would be followed by a more comprehensive guidelines document also provided the opportunity to ensure that the initial document outlined the position or view of PsySSA on the topic of sexual and gender diversity in the form of clear, succinct statements, rather than providing detailed practice guidelines.

It is hoped that this work will also assist colleagues in other African countries in developing guidelines suited to their unique contexts. It is furthermore trusted that this article will contribute positively to the de-stigmatisation, amongst psychology professionals, of all people with diverse sexual and gender identities, including assisting in developing a sensitivity to the continuous, fluid and lifelong development of sexual and gender identity that can be experienced by a person.

acknowledgements

This work was primarily supported by the Arcus Foundation, with additional funding support provided by the Multi-Agency Grants Initiative (MAGI) Fund and the University of South Africa. The authors would like to thank the team that assisted in developing the position statement (Ingrid Lynch, Khonzi Mbatha, Carien Lubbe-De Beer, Caretha Laubscher, Diana Breshears, Delene van Dyk, Raymond Nettman, Liesl Theron and Lusajo Kajula), the broader working group (see PsySSA, 2013), the Arcus Foundation, the Humanistisch Instituut voor Ontwikkelingssamenwerking (HIVOS), Clinton Anderson and Ron Schlittler from the APA, Fatima Seedat and the Executive Committee of PsySSA, Saths Cooper and others on the PsySSA Council, as well as all the other individuals and organisations who provided their voice in the creation of the position statement.

1 The American Psychological Association's (APA) Lesbian, Gay, Bisexual, and Transgender Concerns Office (LGBTCO) has served as the secretariat of IPsyNET since its inception in 2001, then known as the International Network for Lesbian, Gay, and Bisexual Concerns and Transgender Issues in Psychology (INET). Through funding provided by the Arcus Foundation, the LGBTCO has been able to provide financing and technical expertise to PsySSA. Limited additional funding support by the HIVOS MAGI (Multi-Agency Grants Initiative) and the University of South Africa enabled the launch of the PsySSA African LGBTI Human Rights Project in 2011.

• the Statement of the Psychological Association of the Philippines on non-discrimination based on sexual orientation, gender identity and expression (Psychological Association of the Philippines, 2011); and
• the Hong Kong Position paper for psychologists working with lesbians, gays, and bisexual individuals (Hong Kong Psychological Society: Division of Clinical Psychology, 2012).
Lactobacillus rhamnosus GG ameliorates radiation-induced lung fibrosis via lncRNA SNHG17/PTBP1/NICD axis modulation

Radiation-induced pulmonary fibrosis (RIPF) is a major side effect experienced by patients with thoracic cancers after radiotherapy. RIPF carries a poor prognosis, and limited therapeutic options are available in the clinic. Lactobacillus rhamnosus GG (LGG) is widely used for health promotion. However, whether LGG is applicable to the prevention of RIPF, and the relative underlying mechanism, is poorly understood. Here, we report a comprehensive analysis of the impact of LGG and the LGG-modulated lncRNA SNHG17 on radiation-induced epithelial–mesenchymal transition (EMT) in vitro and RIPF in vivo. As revealed by high-throughput sequencing, SNHG17 expression was decreased by LGG treatment in A549 cells post radiation, which markedly attenuated radiation-induced EMT progression (p < 0.01). SNHG17 overexpression correlated with poor overall survival in patients with lung cancer. Mechanistically, SNHG17 can stabilize PTBP1 expression through binding to its 3′UTR, whereas activated PTBP1 can bind the NICD fragment of Notch1 to upregulate Notch1 expression and aggravate EMT and lung fibrosis post radiation. By contrast, SNHG17 knockdown inhibited PTBP1 and Notch1 expression and produced the opposite results. Notably, A549 cells treated with LGG also showed increased apoptosis and increased G2/M arrest post radiation. RIPF mice treated with LGG showed decreased SNHG17 expression and attenuated lung fibrosis. Altogether, these data reveal that modulation of radiation-induced EMT and lung fibrosis by LGG treatment is associated with a decrease in SNHG17 expression and inhibition of the SNHG17/PTBP1/Notch1 axis. Collectively, our results indicate that LGG exerts protective effects against RIPF and that SNHG17 holds potential as a marker of RIPF recovery in patients with thoracic cancers.

Supplementary Information The online version contains supplementary material available at 10.1186/s13062-023-00357-x.

Introduction

Ionizing radiation (IR)-induced lung fibrosis (RIPF) is a typical form of lung injury following radiotherapy for thoracic cancers [1]. The main characteristics of RIPF include progressive dyspnea, increasing accumulation of interstitial fluid and, eventually, respiratory failure [2]. RIPF occurs in up to 20% of patients who receive radiation therapy [3]. Epithelial–mesenchymal transition (EMT), a morphologic switch from the polarized epithelial phenotype to the mesenchymal fibroblastoid phenotype, plays a critical role in the progression of RIPF [4]. The mechanisms underlying EMT-derived RIPF have yet to be fully elucidated, and effective prevention and protection strategies, as well as underlying therapeutic targets, also remain unclear.

SNHG17 is a long noncoding RNA (lncRNA) of 1186 nucleotides whose protein-coding ability is lost or restricted [5]. SNHG17 is highly overexpressed in colorectal cancer, promotes cancer cell proliferation and is an unfavorable prognostic factor [6]. A recent review summarized SNHG17 as a novel cancer-related lncRNA that is highly overexpressed in various cancers and exerts oncogenic functions [7]. SNHG17 is also an EMT-related lncRNA: TGF-β1 activates SNHG17 expression, promoting cancer cell EMT and thereby facilitating esophageal squamous cell growth [8]. SNHG17 can also promote lung adenocarcinoma EMT progression through sponging microRNA-193a-5p [9].
These previous studies show that upregulation of SNHG17 exerts oncogenic functions through regulating the EMT process, and SNHG17 may therefore represent a promising therapeutic target in cancer therapy. However, whether SNHG17 regulates RIPF via EMT remains incompletely understood.

PTBP1 (polypyrimidine tract-binding protein 1) belongs to the PTB family and is a critical regulator of post-transcriptional gene expression that mediates alternative splicing, translation, stability and localization [10]. Previous studies have reported that lncRNAs can regulate PTBP1 functions in breast cancer development [11], the autophagy process [12] and inflammation activation [13]. It is therefore of interest to investigate whether PTBP1 can be regulated by active probiotics.

Lactobacillus rhamnosus GG (LGG), isolated from the healthy human intestine, is one of the most widely supplied probiotics in dairy foods, including yogurt, and is regarded as a beneficial living microorganism for human health [14]. LGG has been reported not only to protect the intestinal barrier and improve diarrhea symptoms in patients with irritable bowel syndrome [15], but also to provide preventive and therapeutic benefits by inhibiting cancer formation [16], improving chemotherapy resistance [17] and protecting against ultraviolet radiation-induced carcinogenesis [18], X-ray-induced testis damage in mice [19] and radiation-induced testis damage [20]. Recently, probiotic strains have been reported to be effective in health promotion by regulating lncRNA expression. One study showed that modulation of lncRNA SRD5A3 by a probiotic-prebiotic-synbiotic treatment attenuated nonalcoholic steatohepatitis progression and ameliorated both fibrosis and hepatic inflammation [21]. Another study showed that probiotics can slow the progression of non-alcoholic fatty liver disease by modulating lncRNA RPARP-AS1 [22]. These investigations suggest that the health effects of LGG may be mediated through the regulation of non-coding RNAs.

We hypothesized that, in radiation-induced EMT, LGG may decrease the expression of SNHG17 and inhibit its oncogenic functions, resulting in a preventive and protective role against RIPF. Thus, we aimed to investigate whether SNHG17 expression can be regulated by LGG and to determine the effects of SNHG17 deficiency on the radiation-induced cell response.

SNHG17 is an LGG-modulated lncRNA in response to radiation

To explore the lncRNAs potentially involved in LGG modulation of lung cancer cells post radiation, we analyzed lncRNA profiles in A549 cells with or without LGG supplementation after 6 Gy radiation. Three samples of LGG-negative A549 cells and three samples of LGG-positive cells were collected for microarray analysis of lncRNAs (Fig. 1a). After screening, 73 lncRNAs were upregulated and 54 lncRNAs were downregulated in the LGG treatment group compared with the non-LGG treatment group after 6 Gy radiation (Fig. 1b). Cluster analysis and a volcano plot illustrated that more lncRNAs were upregulated than downregulated (Fig. 1c, d). Further Kyoto Encyclopedia of Genes and Genomes (KEGG) bioinformatics analysis showed that these changed lncRNAs were enriched in the regulation of circulatory system function and lipid metabolism and, in particular, the regulation of fibrosis and signal transduction (Fig. 1e), indicating that these lncRNAs may perform critical roles in the biological functions modulated by LGG in response to radiation.

Fig. 1 LGG-modulated lncRNAs in response to radiation. a A549 cells were divided into 4 groups: control group (Con), LGG-treated group (LGG), 6 Gy radiation group (IR) and LGG treatment prior to 6 Gy radiation (LGG + IR). b Numbers of up- and down-regulated lncRNAs in LGG + IR versus IR. c Heatmap of differentially expressed lncRNAs in LGG + IR versus IR. Red represents upregulated and purple downregulated lncRNAs. d Volcano plot of differentially expressed lncRNAs in LGG + IR versus IR. Red represents upregulated and green downregulated lncRNAs. e KEGG analysis of the predicted pathways involving the differentially expressed lncRNAs in LGG + IR versus IR. Data are means ± SD (standard deviation). N = 3 independent experiments; Student's two-tailed unpaired t test was used to compare differences between two groups. *p < 0.05
Among these changed lncRNAs, SNHG17 showed the greatest decrease in expression after LGG treatment in A549 cells compared with the other lncRNAs (Additional file 1: Table S1). These results indicated that SNHG17 is an LGG-modulated lncRNA that responds to radiation.

SNHG17 expression increases in lung cancer tissues and serves as a poor prognosis biomarker

As previous studies have found SNHG17 to be an oncogenic lncRNA participating in multiple cancer-associated signaling pathways [5], we focused on exploring its function from the perspective of LGG modulation. We first conducted bioinformatics analyses based on The Cancer Genome Atlas (TCGA) database and the online GEPIA database (http://gepia.cancer-pku.cn/). SNHG17 is frequently upregulated in LUAD (lung adenocarcinoma) and LUSC (lung squamous cell carcinoma) tissues compared with their paired non-cancer tissues (Additional file 1: Fig. S1A, B). Furthermore, SNHG17 expression levels were positively correlated with cancer stage (Additional file 1: Fig. S1A, B). Kaplan-Meier analysis with the log-rank test demonstrated that patients with high SNHG17 expression exhibited poor overall survival in LUSC, ACC (adrenocortical carcinoma), COAD (colon adenocarcinoma) and LIHC (liver hepatocellular carcinoma) (Additional file 1: Fig. S2E-H). We validated the overexpression of SNHG17 in LUAD cohort 1 (cancer tissues = 15, adjacent normal tissues = 15) and LUSC cohort 2 (cancer tissues = 15, adjacent normal tissues = 15) by qRT-PCR and found that SNHG17 was upregulated in LUAD and LUSC tissues compared with their paired adjacent normal tissues (Additional file 1: Fig. S1I, J). These results indicated that SNHG17 may be relevant to lung cancer progression and prognosis.

SNHG17 expression is decreased by LGG treatment in lung cancer cells post radiation

Given the reported oncogenic properties of SNHG17, we further validated that SNHG17 expression was higher in A549 and H1299 cancer cells than in the normal lung epithelial HBE and BEAS2B cells (Additional file 1: Fig. S2A). We also found that SNHG17 expression increased post radiation in a dose- and time-dependent manner (Additional file 1: Fig. S2B, C). To examine the effect of LGG on SNHG17 expression, we co-cultured A549 cells with LGG and found that SNHG17 expression decreased with LGG in a dose-dependent manner (Additional file 1: Fig. S2C). To evaluate whether the decrease in SNHG17 expression is specific to LGG, we compared the modulatory ability of viable LGG and ethanol-killed LGG. The results showed that only viable LGG efficiently modulated SNHG17 expression (Additional file 1: Fig. S2D).
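Returning briefly to the prognosis analysis above: as a minimal illustration of the Kaplan-Meier/log-rank comparison of overall survival by SNHG17 expression, the sketch below splits patients at the median expression level using the Python lifelines package. The file name and column names are illustrative placeholders, not the authors' actual TCGA/GEPIA export.

```python
# Hedged sketch of a Kaplan-Meier / log-rank survival comparison.
# Assumed (hypothetical) columns: SNHG17 (expression), time_days, event (1 = death).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("tcga_lusc_survival.csv")          # placeholder file name
high = df["SNHG17"] >= df["SNHG17"].median()        # dichotomize at the median

kmf = KaplanMeierFitter()
for label, mask in [("SNHG17 high", high), ("SNHG17 low", ~high)]:
    kmf.fit(df.loc[mask, "time_days"], df.loc[mask, "event"], label=label)
    kmf.plot_survival_function()                     # overlay the two curves

res = logrank_test(df.loc[high, "time_days"], df.loc[~high, "time_days"],
                   df.loc[high, "event"], df.loc[~high, "event"])
print(f"log-rank p = {res.p_value:.3g}")
```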
To investigate the combined effects of radiation and LGG, we co-cultured A549 cells with or without LGG and 6 Gy radiation. The phenotype of the cells is illustrated in Additional file 1: Fig. S2F. SNHG17 expression increased post radiation but decreased when LGG treatment preceded radiation (Additional file 1: Fig. S2G). IF assays further demonstrated that SNHG17 expression decreased after LGG treatment post radiation (Additional file 1: Fig. S2H). These results indicated that LGG has the ability to decrease SNHG17 expression post radiation.

SNHG17 deficiency attenuates radiation-mediated cell proliferation, apoptosis and G2/M transition

The KEGG analysis of LGG-modulated lncRNAs post radiation revealed that fibrosis and signal transduction pathways were the most enriched, indicating that SNHG17 may be involved in radiation-mediated biological functions. First, using an SNHG17 probe, we found that SNHG17 translocated from the cytoplasm to the nucleus after radiation (Fig. 2a). A nucleocytoplasmic separation experiment further confirmed that nuclear SNHG17 expression increased after radiation (Fig. 2b). Knockdown of SNHG17 by siRNA inhibited the migration ability of A549 cancer cells, whereas SNHG17 overexpression enhanced cell migration post radiation (Fig. 2c). Knockdown of SNHG17 also promoted cell apoptosis, whereas SNHG17 overexpression inhibited cell apoptosis at 24 h post radiation (Fig. 2d). Cell cycle detection illustrated that SNHG17 knockdown promoted G2/M arrest, whereas SNHG17 overexpression promoted G2/M transition at 2 h post radiation (Fig. 2e).

Fig. 2 b Quantitative analysis of SNHG17 expression in the cytoplasm and nucleus prior to and post radiation by nucleocytoplasmic separation assay. c Representative photographs of migration ability after knockdown of SNHG17 expression in A549 cells with or without 6 Gy treatment, by scratch wound healing migration assay. d Representative photographs of cell apoptosis at 24 h after knockdown of SNHG17 expression in A549 cells with or without 6 Gy treatment. e Representative photographs of the cell cycle after knockdown of SNHG17 expression in A549 cells with or without 6 Gy treatment, by flow cytometric cell cycle assay. GAPDH served as the cytoplasmic expression control and U6 served as the nuclear expression control. Error bars represent the SEM of 3 independent experiments. Data are means ± SD (standard deviation). N = 3 independent experiments; Student's two-tailed unpaired t test was used to compare differences between two groups. *p < 0.05

SNHG17 deficiency attenuates radiation-mediated EMT progression

We then analyzed the role of SNHG17 in regulating radiation-induced EMT (epithelial-mesenchymal transition) in normal BEAS2B cells. Post radiation, E-cadherin, a key epithelial biomarker, decreased, and N-cadherin, a key mesenchymal biomarker, increased in a dose- and time-dependent manner (Additional file 1: Fig. S3A, B). When the cells were treated with LGG, E-cadherin expression instead increased, and Vimentin, N-cadherin and α-SMA expression decreased post radiation, indicating the protective function of LGG in the cell response to radiation (Additional file 1: Fig. S3C). Knockdown of SNHG17 increased E-cadherin expression but decreased Vimentin expression post radiation compared with control siRNA treatment, whereas overexpression of SNHG17 decreased E-cadherin expression and increased Vimentin expression post radiation (Additional file 1: Fig. S3E). Similar effects of SNHG17 on EMT were reproduced in another normal lung epithelial cell line, HBE (Additional file 1: Fig. S3E).

SNHG17 binds PTBP1 directly post radiation

Given the critical role of SNHG17 in regulating the cellular response to radiation, we next explored the possible molecular mechanism. A549 cells with siSNHG17 were treated with or without 6 Gy, and potential binding proteins were detected by liquid chromatography-mass spectrometry (LC/MS) (Fig. 3a). In total, 69 protein complexes were upregulated and 47 were downregulated (Additional file 1: Table S2). Of these changed proteins, PTBP1 ranked as the top downregulated protein. A Gene Ontology (GO) analysis showed that these RNA-protein complexes were involved in the regulation of tight junctions, the ribosome, and the TGF-β signaling pathway (Fig. 3b). A KEGG analysis showed that these RNA-protein complexes were enriched in the ribosome and TGF-β signaling pathways (Additional file 1: Fig. S3C). SNHG17 knockdown decreased PTBP1 mRNA expression, whereas SNHG17 overexpression increased it (Fig. 3d). Furthermore, PTBP1 protein expression decreased in cells treated with siSNHG17 but increased in cells overexpressing SNHG17 post radiation (Fig. 3e, f), indicating that SNHG17 may positively regulate PTBP1 mRNA and protein expression post radiation. RNA pull-down showed that SNHG17 can bind PTBP1 (Fig. 3g).

Fig. 3 SNHG17 can interact with PTBP1 post radiation. a A549 cells were divided into 2 groups: knockdown of SNHG17 (siSNHG17) and IR treatment followed by knockdown of SNHG17 (siSNHG17-IR). Cells were collected after treatment and sent to BiotechPack Scientific Ltd., Beijing, China, for LC-MS/MS detection to discover potential SNHG17-interacting proteins. b GO analysis of the potential biological functions of the differentially binding proteins. c KEGG analysis of the potential biological pathways of the differentially binding proteins. d Quantitative analysis of PTBP1 mRNA expression after knockdown or overexpression of SNHG17 in A549 cells by qRT-PCR. e Representative blots of PTBP1 protein expression after knockdown of SNHG17 in A549 cells post 6 Gy radiation by Western blotting. f Representative blots of PTBP1 protein expression after overexpression of SNHG17 in A549 cells post 6 Gy radiation by Western blotting. g RNA pull-down analysis of the binding of SNHG17 with PTBP1 in total protein extracted from A549 cells post radiation. Data are means ± SD (standard deviation). n = 3 independent experiments; Student's two-tailed unpaired t test was used to compare differences between two groups. *p < 0.05

SNHG17 stabilizes PTBP1 expression by binding to its 3′UTR

SNHG17 structural data obtained through RNA structure analysis are presented online (http://rna.urmc.rochester.edu/RNAstructureWeb/) (Fig. 4a). Additionally, through UCSC (University of California Santa Cruz Genome Browser) database screening and bioinformatic binding-site analysis, SNHG17 was predicted to bind a sequence in the 3′UTR of PTBP1 mRNA (Fig. 4b). A dual-luciferase reporter gene assay showed that SNHG17 can regulate the luciferase activity of the PTBP1 3′UTR (Fig. 4b). A RIP assay further confirmed that SNHG17 binds to the 3′UTR of PTBP1 mRNA in A549 cells post radiation (Fig. 4c). PTBP1 expression increased post radiation in a time- and dose-dependent manner (Fig. 4d). Knockdown of both SNHG17 and PTBP1 increased E-cadherin expression and decreased N-cadherin expression more than knockdown of SNHG17 alone post radiation (Fig. 4e), while overexpression of both SNHG17 and PTBP1 produced the opposite result in A549 cells post radiation (Fig. 4f), indicating that SNHG17 can stabilize PTBP1 expression through binding to its 3′UTR.

Fig. 4 SNHG17 stabilizes PTBP1 expression by binding to its 3′UTR. a Structural analysis using an online tool (http://rna.urmc.rochester.edu/RNAstructureWeb/). b Bioinformatic analysis predicted SNHG17 and PTBP1 binding sites in the SNHG17 sequence; the transcript activity of PTBP1 was measured in A549 cells after knockdown or overexpression of SNHG17. c Immunoblot assay of PTBP1 and β-actin in the RNA pulldown extract with biotin-labeled full-length lncRNA SNHG17; biotinylated anti-sense lncRNA SNHG17 sequences were used as the negative control. d PTBP1 protein expression at the indicated radiation dosages and timepoints by Western blotting. e Representative blots of E-cadherin and N-cadherin protein expression after knockdown of SNHG17 or PTBP1 in A549 cells post 6 Gy radiation by Western blotting. f Representative blots of E-cadherin and N-cadherin protein expression after overexpression of SNHG17 or PTBP1 in A549 cells post 6 Gy radiation by Western blotting. Data are means ± SD (standard deviation). n = 3 independent experiments; Student's two-tailed unpaired t test was used to compare differences between two groups. *p < 0.05

PTBP1 interacts with Notch1 post radiation

To identify the possible function of PTBP1 in A549 cells regulated by SNHG17, the HitPredict and GeneMANIA databases were used to identify proteins potentially interacting with PTBP1. We selected Notch1 because it is a key pathway involved in EMT progression and was the predicted PTBP1-interacting protein in both databases. Immunoprecipitation analysis indicated that PTBP1 could bind to Notch1 in A549 cells (Fig. 5a). Immunofluorescence indicated co-localization of PTBP1 and Notch1 post radiation (Fig. 5b), and this interaction was attenuated in cells treated with siSNHG17 post radiation, indicating that the PTBP1-Notch1 interaction can be regulated by SNHG17 post radiation (Fig. 5c). Western blotting showed that Notch1 expression can be regulated by SNHG17 and PTBP1 (Fig. 5d). Since cycloheximide (CHX) inhibits protein synthesis, we next analyzed the effects of PTBP1 on Notch1 protein stability in A549 cells; the results showed that PTBP1 could inhibit the degradation of the Notch1 protein (Fig. 5e). Since ubiquitination is an important route of protein degradation, we next examined Notch1 ubiquitination in A549 cells with or without siSNHG17; knockdown of SNHG17 did not promote Notch1 ubiquitination (Fig. 5f). Considering that NICD is the transcriptionally active fragment of Notch1 that regulates EMT-related gene expression, we then conducted an immunoprecipitation assay, which showed that PTBP1 can bind to NICD (Fig. 5g). Overexpression of SNHG17 increased NICD and PTBP1 expression in the nucleus (Fig. 5h). Western blotting showed that knockdown of PTBP1 decreased NICD expression post radiation (Fig. 5i), and a rescue assay further confirmed the role of PTBP1 in the regulation of NICD (Fig. 5j). These results indicated that PTBP1 can bind to NICD and that SNHG17 is involved in regulating the PTBP1-NICD interaction.
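The CHX chase in Fig. 5e also suggests a simple way to quantify Notch1 stability: fit a first-order decay to densitometry readings over the treatment timepoints and report a half-life. The sketch below shows such a fit in Python; this analysis is our illustration, not one performed in the paper, and the intensity values are invented placeholders.

```python
# Hedged sketch: estimating a protein half-life from a CHX chase by fitting
# N(t) = exp(-k t) to Notch1/loading-control ratios normalized to t = 0.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 0.5, 1, 2, 4, 8])                   # hours of CHX treatment (as in Fig. 5e)
rel = np.array([1.0, 0.90, 0.75, 0.55, 0.35, 0.15])  # placeholder densitometry ratios

def decay(t, k):
    return np.exp(-k * t)

(k,), _ = curve_fit(decay, t, rel, p0=[0.2])
print(f"k = {k:.3f} per hour, half-life = {np.log(2) / k:.2f} h")
```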
Fig. 5 b Co-localization of PTBP1 and Notch1 protein in A549 cells post radiation, determined by immunofluorescence. Scale bars = 50 μm. c Co-localization of PTBP1 and Notch1 protein in A549 cells with or without knockdown of SNHG17 post radiation, determined by immunofluorescence. Scale bars = 50 μm. d Notch1 protein expression measured after overexpression or knockdown of SNHG17 or PTBP1, determined by Western blotting. e Representative Western blots of Notch1 protein in si-NC- and si-PTBP1-transfected A549 cells at the indicated CHX (cycloheximide) treatment timepoints (0, 0.5, 1, 2, 4, 8 h). f Immunoprecipitation analysis of ubiquitinated Notch1 in si-NC- and si-SNHG17-transfected A549 cells pretreated with the proteasome inhibitor MG132. g Immunoprecipitation analysis of the PTBP1-NICD interaction. h Nuclear NICD and PTBP1 expression examined in A549 cells after overexpression of SNHG17 by Western blotting. i Representative blots of NICD protein expression after knockdown of PTBP1 in A549 cells post 6 Gy radiation by Western blotting. j Rescue assay showing NICD protein expression upon overexpression of PTBP1 following knockdown of PTBP1 in A549 cells post 6 Gy radiation by Western blotting. Data are means ± SD (standard deviation). n = 3 independent experiments; Student's two-tailed unpaired t test was used to compare differences between two groups. *p < 0.05

LGG ameliorates radiation-induced changes in cell proliferation and EMT

Since LGG decreased SNHG17 expression in lung cancer cells post radiation, we next explored the effects of LGG on radiation-induced cell proliferation and EMT. A549 cells were divided into control, radiation, and LGG treatment prior to radiation groups (Fig. 6a). Compared with the control group, LGG inhibited cell survival post radiation in A549 and H1299 cells (Fig. 6b, c). A549 cells treated with LGG also showed increased apoptosis and G2/M arrest post radiation (Fig. 6d, e). Western blotting showed that LGG treatment increased E-cadherin expression and decreased Vimentin and N-cadherin expression in BEAS2B cells post radiation (Fig. 6f), showing the protective effect of LGG on cells in response to radiation insults. These results indicate that LGG ameliorates radiation-induced changes in cell proliferation and EMT.

Fig. 6 LGG improves radiation-induced lung EMT. a BEAS2B cells were divided into 3 groups: control group (Con), 6 Gy radiation group (IR), and treatment with IR followed by LGG treatment (LGG + IR). b Cell survival rate in the Con and LGG groups after the indicated radiation doses in A549 cells. c Cell survival rate in the Con and LGG groups after the indicated radiation doses in H1299 cells. d Representative photographs of cell apoptosis in LGG-treated BEAS2B cells with or without 6 Gy treatment by flow cytometric assay. e Representative photographs of the cell cycle in LGG-treated BEAS2B cells with or without 6 Gy treatment by flow cytometric assay. f Representative blots of E-cadherin, Vimentin and N-cadherin protein expression in LGG-treated BEAS2B cells post 6 Gy radiation by Western blotting. Data are means ± SD (standard deviation). n = 3 independent experiments; Student's two-tailed unpaired t test was used to compare differences between two groups. *p < 0.05

LGG-modulated SNHG17 deficiency ameliorates radiation-induced EMT in vivo

To investigate the role of SNHG17 in vivo, we further evaluated the effects of LGG and SNHG17 silencing on radiation-induced lung EMT progression. Mice were divided into 5 groups as shown in Fig. 7a. A mouse model of radiation-induced lung fibrosis was established via a single dose of 10 Gy to the whole thorax. SNHG17 mRNA expression increased in lung tissues post radiation but decreased in the lung tissues of mice pre-treated with LGG (Fig. 7b). HE staining showed that at 12 weeks after 10 Gy radiation, lung tissue was damaged, with thickened alveolar septa, interstitial oedema and infiltrating inflammatory cells; in LGG- and siSNHG17-treated mice, this damage was attenuated, and alveolar integrity was better preserved than in the radiation-induced lung fibrosis group (Fig. 7c). TEM showed that mitochondrial structure was damaged post radiation but restored after LGG and siSNHG17 treatment (Fig. 7d). Western blotting showed that LGG increased E-cadherin expression and decreased N-cadherin expression in mouse lung tissues post radiation (Fig. 7e). LGG also decreased NICD and PTBP1 expression in mouse lung tissues post radiation (Fig. 7f), as did SNHG17 deficiency (Fig. 7g). These results indicate that LGG may play a protective role in inhibiting radiation-induced EMT in vivo.

Fig. 7 LGG and siSNHG17 improve radiation-induced lung fibrosis. a Male C57BL/6 mice were divided into five groups: control group (Con); radiation-induced lung fibrosis group given a single dose of 10 Gy (10 Gy); mice treated with LGG prior to receiving 10 Gy radiation to the lung (10 Gy + LGG); mice treated with siSNHG17 via tail vein injection prior to receiving 10 Gy radiation to the lung (10 Gy + siSNHG17); and mice treated with LGG and siSNHG17 prior to receiving 10 Gy radiation to the lung (10 Gy + LGG + siSNHG17). The mice were killed at the 7th week to collect tissues for further detection of various parameters. b SNHG17 expression in lung tissues detected by qRT-PCR. c Representative images of hematoxylin and eosin-stained lung tissues. Scale bar = 50 μm. d Representative TEM photographs of lung tissues. e Representative blots of E-cadherin and N-cadherin protein expression in lung tissues with or without LGG treatment post 10 Gy radiation by Western blotting. f Representative blots of NICD and PTBP1 protein expression in lung tissues with or without LGG treatment post 10 Gy radiation by Western blotting. g Representative blots of NICD and PTBP1 protein expression in lung tissues with or without knockdown of SNHG17 post 10 Gy radiation by Western blotting. Data are means ± SD (standard deviation). n = 3 independent experiments; Student's two-tailed unpaired t test was used to compare differences between two groups. *p < 0.05

Discussion

RIPF is a severe side effect in patients with thoracic cancers after radiotherapy. Future research should pay more attention to exploring preventive or protective targets and strategies for RIPF. Recent studies have shown that epigenetic modifications, including lncRNAs, are thought to be critical in the interaction between radiation exposure and damage [23]. Furthermore, probiotics have been identified as preventing radiation damage [24]. Both lncRNAs and probiotics have important roles in health and disease. As a result, studying how probiotics regulate lncRNAs would provide new insight into controlling radiation-induced damage, which could be used as a new approach to the prevention and therapy of RIPF. Our study revealed that SNHG17, identified in LGG-treated lung cancer cells post radiation through RNA-seq analysis, is associated with RIPF and promotes EMT progression. We focused on this LGG-modulated lncRNA because LGG is widely used but the associated molecular mechanisms, in particular the epigenetic mechanism underlying the protective role of LGG in lung cancer patients with radiation-induced fibrosis through the regulation of lncRNAs, remain poorly illustrated.

Regarding LGG, it is a widely used and well-characterized probiotic strain for health promotion. Recently, probiotics including LGG have been used to improve cancer chemo- and radiotherapy, to relieve adverse symptoms or increase quality of life [25]. Evidence has shown that the health effects of probiotics are associated with specific genes and even individual nucleotides, but the relationship between the epigenetic regulation exerted by probiotics and health remains poorly understood. Demont et al. found that both live and heat-treated probiotics can modulate miRNA expression [26]. Xing et al. found that Bacillus coagulans R11 can decrease intestinal injury induced by lead exposure by affecting faecal miRNA functions [27]. Chen et al. found that Lactobacillus plantarum Z01 can reduce Salmonella Typhimurium-induced cecal inflammation by regulating miRNA expression [28]. A recent review showed that crosstalk between miRNAs and probiotics may influence inflammatory bowel disease pathogenesis and therapeutics [29]. Hany et al. showed that probiotics can slow NAFLD progression through regulation of lncRNA RPARP-AS1 [22], whereas Gadallah et al. showed that probiotics can slow NAFLD progression through regulation of lncRNA SRD5A3-AS1 [21]. Since miRNA and lncRNA dysregulation plays an important role in the pathogenesis of many different diseases, these reports indicate that the mechanisms by which probiotics affect disease therapy may be associated with epigenetic regulation. However, most previous reports focused on the effects of probiotics on miRNAs rather than the effects of single probiotic strains, and reports on the role of LGG-lncRNA interactions in radiotherapy are limited. The novelty of our finding is that LGG can modulate lncRNA expression in cancer cells. This provides a new avenue showing that probiotics are involved in epigenetic regulation and deepens the understanding of probiotic-lncRNA crosstalk for better application in cancer therapy.

Regarding SNHG17, it has been identified as a novel long non-coding RNA with oncogenic functions in various cancers [7]. SNHG17 is highly expressed in many cancer tissues and can be induced by many factors, including the m6A methyltransferase METTL3 [30]. The molecular mechanisms of SNHG17 mainly rely on acting as a ceRNA or interacting with downstream proteins directly, resulting in the promotion of cancer proliferation, growth or migration, or serving as a poor prognosis predictor [31, 32]. In our results, we found that SNHG17 can be modulated by LGG. Further study showed that SNHG17 expression increased post radiation but decreased following LGG treatment prior to radiation. Moreover, we found that SNHG17 overexpression can promote radiation-induced EMT. These results are consistent with previous reports that SNHG17 is an oncogenic molecule, and our data provide more information regarding the role of SNHG17 in the cancer cell response to radiation. Most importantly, we found that SNHG17 can bind the 3′UTR of PTBP1 to stabilize its expression and thereby activate Notch1 expression.
Regarding PTBP1, it has been reported as a splicing suppressor that competes with the spliceosome for RNA binding to inhibit alternative splicing [11]. PTBP1 has also been reported to promote EMT progression in breast cancer [33] and gastric cancer [13]. Miao et al. found that lncRNA MALAT1 can stabilize the interaction between PTBP1 and other proteins to affect alternative splicing events [34]. Another study indicated that lncRNA LINREP can promote glioblastoma progression by recruiting the formation of the PTBP1/HUR complex [35]. Notch1 is a key transcription factor in the regulation of lung fibrosis [36]. The inhibitory effect of LGG on SNHG17 expression may inactivate PTBP1 and Notch1 expression, leading to the attenuation of RIPF. In general, our study reveals the role of LGG/SNHG17/PTBP1 in attenuating lung cancer cell proliferation and EMT progression, and suggests that SNHG17 may serve as a novel therapeutic target for RIPF.

Regarding the prevention and therapy of EMT and lung fibrosis, Lactobacillus has been reported to attenuate pancreatic cancer EMT progression [37] and to enhance the anticancer activity of 5-FU in colorectal cancer [38]. Our results further show that LGG can inhibit radiation-induced EMT and RIPF, suggesting a new potential application of LGG in the prevention or therapy of RIPF in the clinic.

In summary, SNHG17 was markedly downregulated in LGG-treated lung cancer A549 cells in response to radiation. Moreover, SNHG17 is an lncRNA that promotes radiation-induced EMT and lung fibrosis through stabilizing PTBP1 expression and activating Notch1 expression. Our results demonstrate that SNHG17 is not only a potential biomarker for the early diagnosis and treatment of radiation-induced EMT and RIPF, but also provide a baseline understanding of the contribution of LGG to counteracting radiation-induced EMT and RIPF through an epigenetic regulatory mechanism. Downregulation of SNHG17 by LGG appears to be a potential prevention target and explains at least one of the missing links among probiotics, epigenetic regulation and cancer therapy.

Cell lines and irradiation condition

Human non-small cell lung cancer A549 cells, human lung epithelial HBE and BEAS2B cells, and human renal epithelial 293T cells were stored in our laboratory and authenticated by the ATCC (American Type Culture Collection, USA). A549, BEAS2B and 293T cells were maintained in DMEM (HyClone) with 10% fetal bovine serum (Gibco, Scoresby, Victoria, Australia). In addition, 1% penicillin-streptomycin-glutamine was added to the medium, and the cells were maintained at 37 °C in a humidified incubator containing 5% CO₂.

Cell transfection

The cells were passaged the day before transfection. After the cells had grown to 60% density, siRNA transfection (sequences listed in Additional file 1: Table S1) was conducted by transient transfection using Lipofectamine 2000 (Thermo Fisher Scientific, Waltham, MA, USA) following the manufacturer's instructions. Scrambled siRNA was used as the negative control. Forty-eight hours after transfection, the cells were collected for further experiments.

Cell proliferation analysis

For the cell proliferation analysis, cells were collected at passage 3-4 and inoculated into 96-well plates at a density of 3 × 10³ cells/well. The effects of LGG on cell viability were detected using a standard Cell Counting Kit-8 (CCK-8) according to the manufacturer's instructions (CK04, DOJINDO, Japan).
The optical density (OD) of the cells in each group was determined by measuring absorbance at 570 nm using a microplate reader.

Establishment of co-culture of cells and LGG

A cell suspension at a concentration of 3.0 × 10⁶ cells/ml was prepared in a six-well cell culture plate; the cell culture medium was discarded, the cells were washed three times with PBS, and a 0.4 µm Transwell insert was then placed in each well of the culture plate. Each well was thereby divided into an upper chamber (the Transwell insert) and a lower chamber (the well of the culture plate). Fresh cell culture medium was added to the lower chamber, just enough to contact the basement membrane of the Transwell insert (about 2 ml). LGG bacteria diluted in cell culture medium were added to the upper chamber at a final concentration of 10¹² CFU/ml, and the cells and bacteria were co-cultured at 37 °C and 5% CO₂ for 4, 12 or 24 h, respectively. The Transwell insert was then removed, and the bacterial suspension was collected and centrifuged at 12,000 r/min for 10 min; the supernatant was discarded, the bacteria were separated and washed three times with PBS, and the final concentration of the bacterial suspension was calculated.

Mice experiment design

Male C57BL/6 mice were purchased at 8 weeks of age from Hunan SJA Laboratory Animal Co., Ltd, China. All of the mice were housed at the Animal Laboratory Division, Xiangya School of Public Health, Central South University, China. All animal procedures and testing were conducted according to the national legislation and local guidelines of the Laboratory Animals Center at Central South University. The study and research protocols were approved by the Institutional Animal Review Board of Central South University (2020sydw0110). In addition, all animals in the study were treated humanely with regard to the alleviation of suffering. All the mice were maintained in a specific-pathogen-free (SPF) environment under controlled conditions with a 12 h light/dark cycle at 20-22 °C and 45 ± 5% humidity. After 1 week of acclimation, the mice were used for the study with the group design detailed below.

To investigate the effects of LGG or SNHG17 on radiation-induced EMT progression, mice were divided into five groups: control group (Con); radiation-induced lung fibrosis group given a single dose of 10 Gy (10 Gy); mice treated with LGG prior to receiving 10 Gy radiation to the lung (10 Gy + LGG); mice treated with siSNHG17 via tail vein injection prior to receiving 10 Gy radiation to the lung (10 Gy + siSNHG17); and mice treated with LGG and siSNHG17 prior to receiving 10 Gy radiation to the lung (10 Gy + LGG + siSNHG17). LGG was administered in drinking water for 7 days prior to radiation, at a bacterial suspension concentration of 4.5 × 10⁹ CFU/ml per mouse. Ten mice were enrolled in each group. Mice were subjected to ⁶⁰Co γ-ray irradiation at 10 Gy to the lung, with the other parts of the body shielded with 10 cm thick lead bricks (dose rate: 200 cGy/min; source-skin distance: 100 cm; voltage: 180 kV; current: 12.5 mA; field size: 3 cm × 40 cm), at room temperature at the Institute of Radiation Medicine, Academy of Military Medical Sciences (Beijing, China) [40, 43]. Body weight, food intake and water consumption were recorded. The mice were killed at the 7th week to collect tissues for further detection of various parameters.

RNA isolation and real-time PCR

Total RNA was extracted from cells using a total RNA extraction kit (Vazyme, China) according to the manufacturer's instructions.
After the quality and quantity of the extracted RNA were confirmed using a nucleic acid quantification instrument (NanoDrop 2000c, USA), complementary DNA (cDNA) was synthesized using HiScript III RT SuperMix for qPCR (+ gDNA wiper) (Vazyme, China) according to the manufacturer's instructions. Taq Pro Universal SYBR qPCR Master Mix (Vazyme, China) was used for real-time PCR analysis on a PCR platform (Bio-Rad CFX96 Touch, USA) to determine the expression level of lncRNA SNHG17. The relative expression of lncRNA SNHG17 was calculated by the 2^(−ΔΔCt) method, with GAPDH used as the housekeeping gene. The specific primers for lncRNA SNHG17, PTBP1 and GAPDH used for RT-PCR are listed in Additional file 1: Table S3. All primers were synthesized by Sangon (Shanghai, China). Each PCR amplification was performed in triplicate to verify the results.

Western blotting

Cells in the logarithmic growth phase were placed in a 35 mm dish at the appropriate density and cultured in an incubator. Proteins were extracted from irradiated cells using M-PER Mammalian Protein Extraction Reagent (Thermo Fisher Scientific, Taiwan, China) according to the manufacturer's instructions. Equal amounts of protein were separated on a 10% sodium dodecyl sulfate-polyacrylamide gel and transferred to nitrocellulose membranes (Millipore, USA). The membranes were blocked with 5% skimmed milk for 1 h and then probed overnight at 4 °C with the primary antibodies. Next, the membranes were incubated with specific secondary antibodies (ZSGB-BIO, Beijing, China) for 1 h at room temperature. Peroxidase labeling was visualized via enhanced chemiluminescence using an ECL Western blotting detection system (Thermo Fisher Scientific, Waltham). The details of the antibodies used in this study are listed in Additional file 1: Table S4.

Fluorescence in situ hybridization (FISH)

Cy3-labeled lncRNA SNHG17 probes were designed and synthesized by RiboBio (Guangzhou, China). The probe signals were detected with a fluorescence in situ hybridization kit (RiboBio, Guangzhou, China) following the manufacturer's guidelines. Images were taken under an immunofluorescence confocal microscope (CrestOptics X-Light V3, Italy).

Immunofluorescence staining

Cells plated on 22 × 22 mm² cover slips in 6-well plates were irradiated, fixed in 4% paraformaldehyde for 30 min at room temperature, permeabilized in 0.25% Triton X-100 buffer for 30 min, and then blocked in 3% BSA for 30 min at room temperature. The cells were incubated with a PTBP1 monoclonal antibody (Sino Biological, Beijing, China) and a Notch1 antibody (Santa Cruz) at 4 °C overnight and washed twice with PBS. Subsequently, the cells were incubated with a FITC-labeled anti-mouse antibody (Invitrogen, USA) and a Texas Red-labeled anti-rabbit antibody (Invitrogen, USA) at room temperature for 2 h. The slides were finally mounted with a fluorescent mounting medium containing DAPI (ZSGB-BIO, Beijing, China). Images were obtained under a confocal microscope (CrestOptics X-Light V3, Italy) with the NIS-Elements Viewer 4.20 capture system. Three high-power visual fields (100× oil lens) were randomly selected from each slide for confocal observation.
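For readers unfamiliar with the relative quantification used in the real-time PCR section above, the following minimal Python sketch spells out the 2^(−ΔΔCt) calculation with GAPDH as the reference gene. The Ct values are illustrative placeholders, not measured data.

```python
# Minimal sketch of the 2^-(ddCt) relative quantification method.
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Fold change of the target gene in treated vs control samples."""
    dct_treated = ct_target_treated - ct_ref_treated   # delta-Ct, treated
    dct_control = ct_target_control - ct_ref_control   # delta-Ct, control
    return 2.0 ** -(dct_treated - dct_control)          # 2^-(delta-delta-Ct)

# e.g. SNHG17 vs GAPDH in irradiated vs control A549 cells (hypothetical Ct values):
# ddCt = (24.1 - 18.0) - (26.3 - 18.1) = -2.1, so fold change = 2^2.1, roughly 4.3x
print(ddct_fold_change(24.1, 18.0, 26.3, 18.1))
```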
RNA immunoprecipitation (RIP)

RIP experiments were performed using the Magna RIP™ RNA-Binding Protein Immunoprecipitation Kit (Millipore, Bedford, MA) according to the manufacturer's instructions. Cells were spread in a 60 mm dish at the appropriate density and irradiated after they had grown to the appropriate amount. The cells were rinsed with PBS and centrifuged at 1500 rpm for 5 min at 4 °C, and the supernatant was discarded. Next, the cells were re-suspended in 100 µl of RIP lysis buffer and pipetted until homogeneous on ice for 5 min. Magnetic beads were washed with RIP wash buffer and incubated with 5 µg of anti-IgG antibody (Millipore, Bedford, MA) or anti-PTBP1 antibody (Thermo Fisher, USA) for 2 h at room temperature in 100 µl of RIP wash buffer. After a brief centrifugation, the supernatant was discarded, and unbound antibodies were washed away from the magnetic beads with RIP wash buffer. The cell lysate was then centrifuged at 12,000 rpm for 10 min at 4 °C, and 100 µl of the supernatant was collected. The supernatant was incubated with 900 µl of IP buffer containing the magnetic beads conjugated with the different antibodies at 4 °C overnight. Before incubation, 10 µl of the sample buffer was removed and marked as the input for later Western blot experiments. The sample buffer was then washed with RIP wash buffer and used in subsequent Western blot experiments for verification after heat denaturation. At the same time, 150 µl of proteinase K buffer was added to the sample buffer to digest protein. Then, the immunoprecipitated RNA was isolated, and co-precipitated RNAs were detected by qRT-PCR.

RNA pulldown assay

The cDNA sequence of SNHG17 and different fragments were cloned into pcDNA3.1(+). Biotin-labeled RNAs were transcribed in vitro using a biotin-labeling mix and T7 polymerase from the linearized pcDNA3.1(+) plasmid following the manufacturer's instructions (Large Scale RNA Production System-T7, Promega). For the RNA pulldown assay, cells were treated with the RNA 3′-End Desthiobiotinylation Kit and Pierce™ Magnetic RNA-Protein Pull-Down Kit (Thermo Fisher, USA). Cells were rinsed with PBS and then re-suspended in 1 ml ice-cold PBS. The suspension was centrifuged at 1000 rpm for 3 min at 4 °C. Next, the cell pellet was suspended in 400 µl of dilution buffer (with a protease inhibitor cocktail) and centrifuged at 12,000 rpm for 10 min at 4 °C. The cell supernatant was collected for the next experiment. Pierce nucleic-acid-compatible streptavidin magnetic beads were washed twice with wash buffer to remove the stock solution and re-suspended in RNA capture buffer. Labeled biotin-lncRNA SNHG17 was added to the beads and incubated for 15-30 min. The beads were then washed twice with wash buffer and re-suspended in Protein-RNA Binding Buffer. Next, the RNA pulldown reaction system was assembled according to the manufacturer's instructions, and the beads were incubated at 4 °C for 2 h on a rotary shaker. Finally, the proteins were eluted with 50 µl biotin elution buffer after washing the beads with wash buffer, and the proteins were detected by SDS-PAGE and mass spectrometry analysis.

Cell-cycle analysis

A PI/RNase Staining Solution kit (CY2001-P, Sungene Biotech, China) was used. The cells were seeded into 35 mm culture dishes at 70-80% density per dish.
The cells were transfected or not transfected with siRNA, subjected to irradiation 12 h after the pretreatment, and harvested at the indicated time points (0, 2, 4, 6, 8, or 12 h) after irradiation. After the medium was removed, the cells were treated with RNase A (62 µg/ml) and incubated at 37 °C for 30 min. The cells were stained with propidium iodide (PI) solution, and the cell-cycle distribution was analyzed by flow cytometry (Agilent NovoCyte, USA). The G2/M assay was based on the cell-cycle detection.

Apoptosis assay
A Fluorescein Isothiocyanate (FITC)-Annexin V Apoptosis Detection Kit was used to detect cell apoptosis according to the manufacturer's instructions (BD Pharmingen, San Diego, CA, USA). Briefly, we collected the cell culture supernatant 24 h after irradiation. After digesting the cells with trypsin (without EDTA), the cells were centrifuged, collected, and washed three times with PBS. Then, we re-suspended the cells in binding buffer at a concentration of approximately 5 × 10⁵ cells/ml and added FITC-conjugated Annexin V and propidium iodide (PI) solution according to the manufacturer's instructions. The cells were incubated at room temperature for 15 min in the dark and then subjected to flow cytometry analysis (Agilent NovoCyte, USA).

Scratch wound healing migration
Cells were seeded into 6-well plates and grown to approximately 90% confluence. Cell monolayers were scratched with a 20-µl sterile pipette tip. Cells were rinsed with phosphate-buffered saline and cultured in DMEM supplemented with 1% fetal bovine serum. Cell migration was photographed 0 and 48 h after scratching using an inverted microscope (Olympus, Tokyo, Japan).

Microarray analysis of lncRNAs
The lncRNA profiling of irradiated A549 and MLE-12 cells with or without LGG treatment after 6 Gy radiation was performed at oeBiotech Biotechnology Corporation (Shanghai, China). An Agilent lncRNA microarray (Agilent Technologies, USA) was used in the analysis. According to the manufacturer's protocol, lncRNAs were labeled and hybridized with the lncRNA Complete Labeling and Hybridization kit. Data normalization and processing were performed using the quantile algorithm in GeneSpring software 12.6 (Agilent Technologies, USA). Differential expression of lncRNAs was assessed via Pearson's correlation analysis with Cluster 3.0 and TreeView software, and differentially expressed genes (DEGs) were defined as those with |logFC| > 2 and p value < 0.05 (a short filtering sketch is given below, after the electron microscopy description).

Electron microscopy for structural analysis of the lung
TEM analysis was performed after collection of the lung tissues by Shiyanjia Lab (www.shiyanjia.com). The tissues were split and treated in a cold fixative solution composed of 2.5% glutaraldehyde at 4 °C for 4 h. After washing with PBS, the specimens were post-fixed in 1% OsO4 at 4 °C for 1 h and washed again with PBS. A graded series of ethanol solutions was used for further dehydration, and the specimens were then embedded for sectioning. TEM was performed with a JEM-2100F at 80 kV, and images were acquired using a side-inserted BioScan camera.
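The DEG thresholds above (|logFC| > 2, p < 0.05) amount to a simple table filter. The following minimal sketch assumes a pandas DataFrame with hypothetical column names; the probe rows are made up for illustration.

```python
# Minimal sketch of the DEG filter described above (|logFC| > 2, p < 0.05),
# assuming hypothetical column names 'logFC' and 'pvalue'.
import pandas as pd

def select_degs(df: pd.DataFrame) -> pd.DataFrame:
    """Return rows passing both differential-expression thresholds."""
    return df[(df["logFC"].abs() > 2) & (df["pvalue"] < 0.05)]

# Toy table with made-up probes
table = pd.DataFrame({
    "lncRNA": ["SNHG17", "lnc-A", "lnc-B"],
    "logFC":  [2.8, 1.1, -2.5],
    "pvalue": [0.003, 0.20, 0.01],
})
print(select_degs(table))  # keeps SNHG17 and lnc-B
```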
Online available databases
SNHG expression in various cancers was evaluated using the TCGA database (https://cistrome.shinyapps.io/timer/). The GEPIA database (http://gepia.cancer-pku.cn/index.html) was used to analyze RNA sequencing data from normal and tumor tissue samples from the TCGA and GTEx projects. We also used the GEPIA website to generate overall survival curves. The UCSC (http://genome.ucsc.edu/) and ALGGEN (http://alggen.lsi.upc.es/) databases were used to obtain the potential transcription factor binding sites in the promoters of genes. The GeneMania (http://genemania.org/) and HitPredict (http://www.hitpredict.org/) databases were used to predict proteins potentially interacting with SNHG17.

Statistical analysis
All experiments were performed at least three times independently. In general, Student's two-tailed unpaired t test was used to compare differences between two groups. One-way analysis of variance followed by the Newman-Keuls multiple comparison test was used to compare more than two groups. All data are expressed as the means ± standard deviation (SD) for each experiment. A p value of < 0.05 was considered to indicate a statistically significant result. GraphPad Prism 6 software (GraphPad Software Inc., La Jolla, CA) was utilized for all statistical analyses and construction of graphs.
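As a minimal illustration of the two-group comparison described above, the following sketch runs a two-tailed unpaired t test on hypothetical triplicate values; the numbers are placeholders, not study data.

```python
# Minimal sketch of a two-tailed unpaired Student's t test on three
# independent replicates per group; values are hypothetical placeholders.
from scipy import stats

group_a = [1.02, 0.95, 1.08]   # e.g., control replicates
group_b = [1.85, 1.72, 1.93]   # e.g., treated replicates

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}",
      "(significant)" if p_value < 0.05 else "(n.s.)")
```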
2023-01-14T05:14:07.724Z
2023-01-12T00:00:00.000
{ "year": 2023, "sha1": "30e3ec13adfa7a8bf1970afdb66f4527b737f4a5", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "30e3ec13adfa7a8bf1970afdb66f4527b737f4a5", "s2fieldsofstudy": [ "Environmental Science", "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
128359708
pes2o/s2orc
v3-fos-license
Al atomistic surface modulation on colloidal gradient quantum dots for high-brightness and stable light-emitting devices

Quantum-dot (QD) light-emitting devices (QLEDs) have been attracting considerable attention owing to the unique properties of QDs: an emission wavelength controllable through the particle size, a narrow emission bandwidth, and high brightness. Although there have been rapid advances in terms of luminance and efficiency improvements, the long-term device stability is limited by the low chemical stability and photostability of the QDs against moisture and air. In this study, we report a simple method that can enhance the long-term stability of QLEDs against oxidation by inserting Al into the shells of CdSe/ZnS QDs. The Al coated on the ZnS shell of the QDs is photo-oxidized into Al2O3, which acts as a protective layer that prevents the photodegradation of the QDs under prolonged irradiation and stabilizes the device during long-term operation. The QLEDs fabricated using CdSe/ZnS/Al QDs exhibited a maximum luminance of 57,580 cd/m2 and a current efficiency of 5.8 cd/A, more than 1.6 times greater than those of CdSe/ZnS-QD devices. Moreover, the lifetimes of the CdSe/ZnS/Al-QD-based QLEDs were significantly improved owing to the self-passivation at the QD surfaces.

...by a considerable QY reduction16. In addition, silica-coated QDs are usually powder-type materials and are not well suited for solution-process-type optoelectronic devices. Recently, the groups of Yang and Li proposed a method for doping ZnS shells with Al for QD production that is solution processable and provides effective self-passivation17,18. These studies showed that, upon light irradiation, the Al doped in the ZnS shell became photo-oxidized into Al2O3, which acted as a protective layer and prevented photodegradation of the QDs upon prolonged irradiation. Although these studies reported a substantial enhancement in the photostability, the complex fabrication process of the individual core and shell, as well as the Al doping, required a long fabrication time. Furthermore, the fabrication of the individual core/shell structure induced a lattice mismatch between the core and shell. Consequently, the highest achievable QY from the as-obtained QDs was smaller than 80%. Therefore, a short and simple synthesis method for QDs, which can improve their photostability and QY, is necessary. In this study, to improve the QD stability and device performance, we induced Al atomic passivation on the QD surface and demonstrated its application in a solution-processable QLED. To the best of our knowledge, no studies have reported the application of Al-passivated Cd-based core/shell QDs in QLEDs (Al-QLEDs). Two types of QLEDs were fabricated using CdSe/ZnS and CdSe/ZnS/Al QDs, and their electroluminescence (EL) characteristics were compared, revealing significant differences in luminance, efficiency, and lifetime, which reflect the significant impact of the additional Al shell on the device performance. The device with gradient QDs with Al shells exhibited significantly higher luminance, current efficiency, and long-term stability than a device with gradient QDs without Al shells. The luminance and current efficiency of the Al-QLED are 1.6 times higher than those of the QLED without Al shells.
We show that the QLED performance can be modified by the introduction of the Al shells on the QDs in the emissive layer (EML). By analyzing the Al-passivated QDs, we confirmed not only their photostability but also the efficient carrier dynamics and improved lifetime of the Al-QLED. In order to evaluate the performance of the Al-QLEDs, we fabricated solution-processable QLEDs by employing a modification of previously reported methods19,20.

Results
Structural and optical properties of the QDs. A simple synthesis method and the passivation phenomenon related to CdSe/ZnS/Al2O3 QDs are schematically illustrated in Fig. 1. The Al element was introduced as Al oxide into the ZnS shell on the CdSe core and then formed a passivation layer when the surface ZnS shell was degraded under irradiation or heating. The synthesized QD solutions consisted of CdSe/ZnS and CdSe/ZnS/Al QDs with PL peaks at a wavelength of λ = 540 nm, as shown in Fig. 2a. It is worth noting that no significant difference in the ultraviolet (UV)-visible absorption spectrum was observed between the QDs with and without the Al shells. These results indicate that there is no significant difference in the energy gap (Eg) between QDs with and without Al shells. The Eg was estimated to be 2.25 eV by converting the original absorption spectrum into a graph of (ahv)² against hv (where a = absorbance, h = Planck's constant, and v = light frequency), and then extrapolating the straight part of the graph to the hv axis (Supplementary Fig. S1); a scripted version of this extrapolation is sketched below. The Al-shell QDs (CdSe/ZnS/Al) yielded a PL intensity significantly higher than that of the sample without Al. The results indicate that the Al passivation leads to a slight increase in the shell thickness, which can enhance the PL intensity of the QDs by reducing the electron-coupling effect between adjacent QDs. The relative PL QY of the QDs was measured by comparing their PL intensities with those of a primary standard dye solution (Rhodamine 6G) at the same optical density (0.05), at an excitation wavelength of 450 nm21. The absolute QY of the QD solutions was obtained with an absolute PL QY measurement system (OTSUKA Electronics, QE-2000). The PL QY results are consistent with the PL intensity results: the Al-passivated QDs exhibit a higher PL QY than the bare QDs (Supplementary Fig. S2). The highest QY was obtained with Al overshelling for 2 h; a longer overshelling often worsens the QY (Supplementary Fig. S3). These results show that a more efficient thin-film EML is achieved with optimized Al shelling of the QDs. Figures 1c and 2b show transmission electron microscopy (TEM) images of QDs with and without Al shells, prepared using a coating time of 2 h and a doping concentration of 2 mmol. The average sizes of the CdSe/ZnS QDs and CdSe/ZnS/Al QDs were estimated to be 6.7 and 7.6 nm, respectively; the CdSe/ZnS/Al QDs were larger owing to the 0.9 nm-thick Al2O3 shell. Therefore, the significantly larger average size of the CdSe/ZnS/Al QDs compared with that of the CdSe/ZnS QDs is entirely attributed to the thicker Al shell.

Photostability and water stability of QD emitters. The photostabilities of CdSe/ZnS/Al QDs with different Al doping concentrations are shown in Fig. 3a and compared with that of the normal QDs. The Al shelling of the QDs was performed for 2 h. The QDs had similar sizes to avoid effects of the shell thickness on the photostability.
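Returning briefly to the band-gap estimate above, the (ahv)² extrapolation can be scripted. The following minimal sketch uses a synthetic absorption edge rather than the measured spectrum; a real analysis would load the recorded absorbance data.

```python
# Minimal sketch of the band-gap estimation: fit the linear region of
# (ahv)^2 vs. hv and extrapolate to the hv axis. The spectrum is synthetic.
import numpy as np

hv = np.linspace(2.0, 3.0, 200)                 # photon energy (eV)
a = np.clip(hv - 2.25, 0, None) ** 0.5          # synthetic direct-gap absorbance
y = (a * hv) ** 2                               # Tauc quantity (ahv)^2

# Fit a straight line over a window just above the absorption edge
mask = (hv > 2.35) & (hv < 2.6)
slope, intercept = np.polyfit(hv[mask], y[mask], 1)
eg = -intercept / slope                         # x-intercept of the fit
print(f"Estimated Eg = {eg:.2f} eV")            # close to the 2.25 eV edge
```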
The results indicate that the photostability of the QDs was significantly improved when Al2O3 was shelled outside the ZnS shell. The PL intensities of all of the QDs decreased under strong light irradiation. The QDs without Al2O3 shelling exhibited a rapid PL intensity decay to 50% of the initial PL intensity after 5 h. Al shelling with an Al doping concentration of 2 mmol yielded the most stable QDs; their emission was maintained above approximately 80% of the initial intensity for 8 h of operation. An excessively large Al doping concentration degrades the intrinsic chemical stability of ZnS:Al, because unstable Al-S bonds reduce the photostability of the CdSe/ZnS/Al2O3 QDs17,22. In order to validate the superiority of the moisture barrier of the CdSe/ZnS/Al QDs from a more practical perspective, the QDs were mixed with water under continuous UV irradiation, as shown in Fig. 3b. The PL intensity of the QDs without Al shelling significantly decreased, and the emission peak red-shifted. The PL of the QDs without Al2O3 shells decreased by 68% after 60 min. UV irradiation durations longer than 60 min caused rapid turbidity increases in the QD-water mixture. Compared with the QDs not mixed with water, those treated for 60 min exhibited an approximately 8 nm red-shift in the PL, while their absorption peaks were unchanged. On the other hand, the QDs with the Al shells exhibited good water stability against prolonged UV irradiation, maintaining 90% of their initial PL even after 2 h of UV irradiation without exhibiting temporal spectral diffusion. The Al2O3 shells are very stable in moisture and oxygen and can act as effective barriers to prevent non-radiative recombination.

X-ray photoelectron spectroscopy (XPS) and Fourier-transform infrared (FT-IR) analyses. In order to verify the coordination of the Al2O3 shell to the CdSe/ZnS QD surface, we analyzed the XPS results and FT-IR spectra of the materials, as shown in Fig. 4. Figure 4a,b show typical XPS spectra of the two types of QDs, with and without Al shelling, respectively. After the Al shelling on the CdSe/ZnS QDs, Al 2p and Al 2s peaks appeared at 74 and 119.3 eV, respectively; these peaks may be associated with oxidized Al species such as Al-OH or Al2O3 23. As shown in Fig. 4c, an FT-IR spectral peak around 802 cm⁻¹ is observed after the Al shelling, related to Al-O vibrations in the Al oxide22,24. Additionally, the peak at 1,023 cm⁻¹ corresponds to Al-OH in the CdSe/ZnS/Al QDs, which is consistent with the XPS results. Furthermore, we performed energy-dispersive spectroscopy (EDS); Supplementary Fig. S4 shows the typical EDS spectra of the two types of QDs, with and without Al shelling, along with their compositional results (insets). The Al percentage was calculated to be approximately 4.3%. These results confirm that the Al shell was well formed on the QD surface.

Solution-processed QLEDs. In order to investigate the contribution of the Al shelling of the QDs to the QLED performance, we fabricated QLEDs with three different Al doping concentrations and without Al shelling.
Schematics of our device multilayer structure, comprising patterned indium tin oxide (ITO) as the anode, poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) as the hole-injection layer (HIL), poly(N,N′-bis(4-butylphenyl)-N,N′-bis(phenyl)benzidine) (poly-TPD) as the hole-transport layer (HTL), QDs as the EML, ZnO nanoparticles (NPs) as the electron-transport layer (ETL), and an Al layer as the cathode, as well as the corresponding energy-level diagram and a cross-sectional TEM image, are shown in Fig. 5. In order to fabricate the QLEDs, we synthesized two types of QDs, with and without Al shelling. Except for the Al cathode, deposited using vacuum thermal evaporation, all of the other layers were sequentially deposited on the ITO by spin-coating. The fabrication of the multilayered structure using the solution process requires the use of orthogonal solvents to ensure the integrity of the underlying layers during the deposition of the overlayers.

Performances of the QLEDs. The surface ligands of the QDs are important factors affecting the electrical properties of the QLEDs. In general, colloidally synthesized CdSe/ZnS QDs are capped with long organic ligands. When the EML is formed using QDs capped with long organic ligands, these organic surface ligands increase the distance between the individual QDs and exhibit insulating properties that interfere with charge injection into the EML. Our as-synthesized CdSe/ZnS QDs and CdSe/ZnS/Al QDs are capped with oleic acid (OA) and 1-dodecanethiol (DDT), respectively. The thiol ligand capping the CdSe/ZnS/Al QDs is much shorter than the OA capping the CdSe/ZnS QDs22. The shorter ligand length reduces the interparticle spacing (IPS) of the QDs19,25; therefore, the reduced IPS makes it possible to form a closely packed EML. The effective confinement of charges from the adjacent layer in the QD EML leads to a lower turn-on voltage of the QLEDs. The effectiveness of the Al shelling of the QDs was confirmed by measuring the EL spectra, current densities, luminances, and current efficiencies of the QLEDs. Figure 6a shows the performance of the fabricated QLEDs at an applied voltage of 6 V; no parasitic emission from the adjacent layers was observed. The EL peaks of the fabricated QLEDs are at approximately 546 nm. Under the same applied voltage, the EL intensity of the sample with Al shelling is significantly higher than that of the sample without Al shelling. The results indicate that the Al shelling can enhance the EL intensity by reducing the number of trap sites. In order to confirm the contribution of the Al shelling of the QDs in promoting electron injection and transport, the current densities of electron-only devices with and without Al shelling were measured (Supplementary Fig. S5). The current density of the electron-only device with Al shelling (ITO/Al/QDs(CdSe/ZnS/Al)/Al) is much larger than that of the device without it (ITO/Al/QDs(CdSe/ZnS)/Al). In both devices, the thicknesses of all layers are identical to those used in the QLEDs. This result clearly demonstrates that electron injection and transport in the devices with CdSe/ZnS/Al QDs are enhanced, even though the thicker Al shell could act as a nontrivial energy barrier against charge injection. The current densities, luminances, and current efficiencies of the QLEDs with and without Al shelling are presented in Fig. 6b-d and Table 1.
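For context on these figures of merit: the current efficiency in cd/A is the luminance divided by the current density. A minimal sketch of that unit conversion follows, with illustrative values rather than the measured device data.

```python
# Minimal sketch: current efficiency (cd/A) = luminance (cd/m^2) /
# current density (A/m^2). Values are illustrative, not device data.

def current_efficiency(luminance_cd_m2: float, j_ma_cm2: float) -> float:
    """Current efficiency in cd/A from luminance and current density."""
    j_a_m2 = j_ma_cm2 * 10.0          # 1 mA/cm^2 = 10 A/m^2
    return luminance_cd_m2 / j_a_m2

# Example: 5,800 cd/m^2 at 100 mA/cm^2 corresponds to 5.8 cd/A
print(f"{current_efficiency(5800, 100):.1f} cd/A")
```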
These results indicate that QLED3 exhibited the highest performance, which is consistent with the photostability results. The results of the QDs synthesized through the gradient method indicated that the optimized Al doping concentration is 2 mmol. In addition, they showed that a too large Al concentration worsened the intrinsic chemical stability of ZnS:Al and decreased the device performance owing to the Al-S bonds. The current densities of the CdSe/ZnS/Al-QD-based QLEDs were lower than that of the CdSe/ZnS-QD-based QLED. These results indicate that the shell of the CdSe/ZnS/Al QDs is thicker than that of the CdSe/ZnS QDs, which implies a weaker effective electric field and consequently a lower charge injection into the EML21,26. However, the CdSe/ZnS/Al-QD-based QLED exhibited lower leakage currents and higher luminance, and was much superior to the CdSe/ZnS-QD-based QLED in terms of device efficiency. As shown in Fig. 6d, the current-density-dependent variations in the current efficiency indicate that, under the same current flow, the radiative recombination of the electrically excited QDs is significantly more efficient in the CdSe/ZnS/Al-QD-based QLEDs than in the CdSe/ZnS-QD-based QLED27. In our device architecture, the electron injection into the EML takes precedence over the hole injection28. The accompanying accumulation of excess electrons reduces the efficiency of individual QDs through Auger recombination29, particularly as the excitation density increases. However, the Al2O3 shell acts as a barrier between the ETL and the QD, balancing the injection of electrons and holes. As a result, QDs with Al shells exhibit higher efficiencies and excellent exciton generation rates.

Lifetime characteristics of the QLEDs. In order to validate the superiority in device stability, we evaluated the lifetimes of QLED1 and QLED3. The lifetime characteristics of the unencapsulated QLEDs were assessed by operating the devices at a constant current of 5 mA, as shown in Fig. 7. All of the lifetime measurements were performed under ambient conditions. The lifetime T50 (measured in hours) is the time required for the luminance to decrease to 50% of its initial value. The luminance of QLED1 deteriorated rapidly from its initial luminance of 1,000 cd/m2, reaching T50 after 75.6 h of continuous operation. In contrast, the luminance of QLED3 (initially 1,000 cd/m2) decayed slowly, reaching T50 after 226 h. These results clearly show that QLED3 was more stable under the operating conditions and that its lifetime was almost three times longer than that of QLED1. This excellent device stability of QLED3 is attributed to the efficient QD passivation by the Al2O3 shell, which serves as a physical barrier to the penetration of oxygen.

Discussion
We developed a simple approach to improve the device stability and photostability of gradient CdSe/ZnS QDs using an outer Al2O3 shell. In the operating devices based on CdSe/ZnS/Al QDs, the energy loss due to Auger recombination was considerably reduced and potentially completely suppressed at the driving currents. The resulting solution-processable QLED exhibited an excellent device performance, with a maximum luminance of 57,580 cd/m2, a maximum current efficiency of 5.48 cd/A, a low turn-on voltage (≤2 V), and an operation lifetime of 226 h.
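As an aside, T50 is read off a measured luminance-decay curve as the first time the luminance falls below half its initial value. A minimal sketch of that readout follows, using synthetic decay data rather than the measured curves.

```python
# Minimal sketch: locate T50 on a luminance-decay curve by finding the
# first sample below 50 % of the initial value and interpolating linearly.
# The decay data are synthetic, constructed to have T50 = 226 h.
import numpy as np

t = np.linspace(0, 300, 601)                    # operating time (h)
lum = 1000 * np.exp(-np.log(2) * t / 226)       # synthetic decay curve

def t50(time: np.ndarray, luminance: np.ndarray) -> float:
    half = 0.5 * luminance[0]
    i = np.argmax(luminance < half)              # first sample below 50 %
    # linear interpolation between the bracketing samples
    f = (half - luminance[i - 1]) / (luminance[i] - luminance[i - 1])
    return time[i - 1] + f * (time[i] - time[i - 1])

print(f"T50 = {t50(t, lum):.0f} h")              # -> ~226 h
```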
The considerable improvements in the photostability and device stability were attributed to the self-passivation characteristics of the Al2O3 shell, which serves as a physical barrier to the penetration of oxygen and moisture.

Methods
Synthesis of the CdSe/ZnS and CdSe/ZnS/Al QDs. Green CdSe/ZnS QDs with chemical composition gradients were prepared using a method reported in the literature19,20,30. For a typical synthesis, 0.4 mmol of CdO, 4 mmol of Zn(acet)2·2H2O, and 5 mL of OA were loaded into a 100-mL three-neck flask and evacuated at 150 °C under vacuum degassing for 30 min. After the vacuum degassing, high-purity argon (Ar) gas was purged. Subsequently, 15 mL of 1-octadecene (ODE) was added to the three-neck flask, the temperature was increased to 320 °C, and a stock solution containing 0.4 mmol of Se and 3.0 mmol of S in 2.0 mL of trioctylphosphine (TOP) was quickly injected into the reactor at the elevated temperature. The reaction temperature was maintained at 320 °C for 10 min for CdSe/ZnS QD growth. For a typical synthesis of CdSe/ZnS/Al QDs, after the reaction for the CdSe/ZnS QDs was completed, we performed the dropwise addition of a mixture of Al(IPA)3 dissolved in DDT at 235 °C. This Al shelling was maintained at 235 °C for 2 h. All of the synthesized QDs were purified by adding
2019-04-24T14:05:55.037Z
2019-04-23T00:00:00.000
{ "year": 2019, "sha1": "1d391498bfcd8b929e506304a7ba26aac9935ec4", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-42925-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "94e9c13065763487b0939ef7e6abbb9c7a251b18", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
55451532
pes2o/s2orc
v3-fos-license
Modular System Modeling for Quantitative Reliability Evaluation of Technical Systems

In modern times, it is necessary to offer reliable products to match the statutory directives concerning product liability and the high expectations of customers for durable devices. Furthermore, to maintain a high competitiveness, engineers need to know as accurately as possible how long their product will last and how to influence the life expectancy without expensive and time-consuming testing. As the components of a system are responsible for the system reliability, this paper introduces and evaluates calculation methods for the life expectancy of common machine elements in technical systems. Subsequently, a method for the quantitative evaluation of the reliability of technical systems is proposed and applied to a heavy-duty power shift transmission.

Introduction
Even though the design and development time frames of new products become shorter and shorter, it is mandatory to create a competitive product that matches customers' demands. As products become more complex with each life cycle, reliability is a big challenge. Problems in achieving this target are reflected in the growing number of recall campaigns by car companies (see Figure 1). This trend implies that the high complexity of new products is not under control, resulting in a decrease in reliability. When developing an entirely new car, the subsystem "vehicle transmission" causes 1-10 % of the total development costs Dette and Kozub (2000). To establish a reliable forecast of the expected failure rate during the development process, tests with a large number of transmissions would be necessary. Those tests, however, are rarely conducted due to cost and time. Hence, a quantitative assessment of the transmission's reliability is required, which allows a durable and therefore cost-efficient design of the components. Especially for the design of gear shafts, a very high fatigue strength is common. This kind of over-sizing causes unwanted costs, weight and additional space requirements in a transmission Naunheimer et al. (2007), which should be avoided.

The initially mentioned trend of rising recall numbers is in contrast to customers' demands. Customers consider reliability the most important requirement when buying a brand-new car DAT (2015). Therefore, the creation and application of methods for reliability evaluation are mandatory. The easiest way to determine the reliability of a product is the statistical analysis of historic failures. Since such information is not available for new products, tools with predictive capabilities are required.

Therefore, this paper proposes a method which predicts the reliability of the whole system by using information available for the individual components of the system. In addition to the properties of the single components, influences from the environment, such as temperature and dirt contamination, are explicitly considered.
Earlier works focused mostly on tests to determine the reliability of components without evaluating reliability calculation methods. In addition, mostly gears and bearings were included in reliability determinations. Other components that are also critical for the functioning of the system, such as seals, shafts and clutches, have often not been included, either because their influence was considered irrelevant or because proper calculation methods were not available. Since all components must be taken into account for a reliable prediction of the whole system, we go beyond these previous works and consider more parts of the transmission Bertsche and Lechner (2004).

State of the Art
Mechanical components usually fail due to fatigue and wear. Because failures caused by fatigue are usually not recognizable before they appear, it is important to have fundamental knowledge about the transmission to predict fatigue failures. From a reliability point of view, failures due to wear are less critical but often become apparent through increasing noise or vibrations Kopp (2013). To evaluate reliability, a precise definition, as well as knowledge of what it is caused by, is needed.

Reliability is defined as the probability that a product does not fail during a defined period of time under given function and surrounding conditions Bertsche (2008). In the following, the fundamentals of reliability analyses and lifetime calculations are described.

Fundamentals of Reliability Analysis
Since stress and strength of mechanical components are distributed statistically, the reliability of the components follows the rules of statistics as well (see Figure 2). Failure can occur when the actual stress is higher than the strength of a component. To achieve the highest possible reliability, the component has to be designed such that the strength is substantially higher than the maximum stress. Such a design, however, causes high costs and might be inconsistent with other design requirements (e.g., space, weight) Naunheimer et al. (2007). Therefore, a good design is characterized by the right balance between economic and durability considerations.

Figure 2: Stress-strength interference Kopp (2013)

Reliability methods are used to predict the components' reliability. There are two kinds of reliability methods, qualitative and quantitative. Qualitative methods, e.g. the Failure Mode and Effect Analysis (FMEA), enable a systematic investigation of the effects of errors and failures; however, qualitative methods are not able to describe reliability changes over time. To get detailed information about how to design reliable parts and calculate maintenance costs in advance, quantitative methods are necessary.

In order to apply quantitative methods, the component loads and the component's failure behavior need to be mathematically described. According to the state of the art, the lifetime of cyclically loaded components can be calculated by damage accumulation hypotheses. For that, the loads are classified into different load classes and the number of load alternations is counted. By comparing the actual number of load alternations n_i with the bearable number of load alternations N_i, the part damage per load level can be calculated. These part damages are then accumulated and yield the overall damage S.
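A minimal sketch of this accumulation, using the elementary damage sum S = Σ n_i / N_i; the load spectrum values below are hypothetical, not taken from any of the cited tests.

```python
# Minimal sketch of elementary damage accumulation (Miner's rule):
# per load class i, the damage n_i / N_i is computed and summed.

def accumulated_damage(n, N):
    """Overall damage S = sum(n_i / N_i) over all load classes."""
    return sum(ni / Ni for ni, Ni in zip(n, N))

n = [1.0e5, 4.0e5, 2.0e6]   # actual load alternations per class (hypothetical)
N = [2.0e6, 1.5e7, 1.0e8]   # bearable load alternations per class (S-N curve)
S = accumulated_damage(n, N)
print(f"S = {S:.3f}")        # the component fails, per definition, at S = 1
```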
Whenever this damage S reaches a value of 1, the component will fail per definition Haibach (2006). Three different hypotheses are illustrated in Figure 3. A component's failure behavior can be displayed as a histogram of its lifetimes. Figure 4 shows the lifetimes and the histogram of a stress test. The abscissa reflects the number of load alternations before the component fails. In the limit of a very large number of failure tests and small class widths, the empirical density function f* can be approximated by a continuous density function f. The density function describes the number of failures at the time t or after n load alternations, relative to the total number of tests.

For many considerations, however, not the number of failures at a certain point in time is relevant, but rather the number of failures that occurred during a certain time period. This quantity can be described by the distribution function F(t), usually referred to as the failure probability. F(t) corresponds to the probability with which failures have happened up to a time t, and can be computed from the density function f(t) according to Eq. (1), F(t) = ∫ f(τ) dτ from 0 to t. Yet another useful function is the survival probability function, or simply the "reliability", R(t), which describes the probability with which a component has survived a certain time t, see Eq. (2), R(t) = 1 − F(t) Bertsche and Lechner (2004).

To characterize reliability data, the usage of parameters such as the B_x lifetime is common. B_x lifetimes define the point in time after which x % of the components have failed statistically. For transmissions, usually B_1 and B_10 are used DIN 3990 (1987). Several mathematical expressions have been used to represent reliability functions. Although the normal distribution has a high overall acceptance in science, this function is rarely used in reliability engineering. Here, one of the most commonly used functions is the Weibull distribution. When using this expression, the failure probability function, survival probability function and density function are given by Eqs. (3), (4) and (5): F(t) = 1 − exp(−((t − t_0)/(T − t_0))^b), R(t) = exp(−((t − t_0)/(T − t_0))^b), and f(t) = dF(t)/dt Bertsche and Lechner (2004). By changing the shape parameter b, the Weibull distribution can be used to describe many different failure behaviors, see Figure 5. For b = 1, the Weibull distribution is equivalent to the exponential distribution; for b = 3.5, it is similar to the normal distribution. The characteristic lifetime T, or scale parameter, describes the mean value of the distribution. For t = T, the failure probability is F(t = T) = 63.21 %. With t_0, an initial time frame without any expected failures can be described.

Furthermore, failures of mechanical components can be divided into three categories Naunheimer et al. (2007):
1. Early failures due to faulty assembly, unsuited material or development errors. These kinds of failures are not predictable and are commonly described by Weibull distributions with a shape parameter b < 1.
2. Random failures due to errors while operating the system or maintenance failures. Like early failures, these failures are not predictable. Weibull distributions with b = 1 are suited for such failures.
3. Wearout failures due to fatigue and wear. These kinds of failures are quantifiable. Therefore, they are the only kind of failures that are assessable in reliability calculations. A Weibull distribution with b > 1 represents such behavior.

In the next section, the availability of methods to calculate the lifetimes of mechanical components is explained.
Lifetime Calculation of Mechanical Components
It is neither possible nor necessary to calculate a quantitative reliability for every mechanical component, as the components of a transmission contribute with different weights to the overall reliability. To identify critical components, an "ABC-analysis" is well suited. The ABC-analysis is a simple qualitative method and is used in this context to evaluate components in terms of their impact on system reliability and the availability of calculation methods. As shown in Figure 6, the categories contain the different mechanical components of a transmission, divided by their influence on the transmission's reliability and the availability of verified calculation methods.

Category A contains components that are relevant for the transmission's reliability and for which calculation methods are sufficiently accurate. Components categorized as "A" are e.g. gears, bearings and shafts. "B-components", e.g. friction clutches and seals, are components whose reliability is relevant for the transmission's reliability as well; however, in contrast to A-components, their life expectancy cannot be predicted with sufficient accuracy. Therefore, a statistical statement about the category B components is currently only possible by real-life testing. Category C contains components that are not relevant for the reliability of the entire transmission and for which the lifetimes cannot be predicted. Typical elements of this category are components like housings and locking rings. Components of category A have to be divided even further to find every kind of failure. The actual failure distributions of each component cannot be calculated and have to be determined by testing. Usually, mechanical components of the same type have similar failure distributions.

Early and random failures cannot be calculated for any of the described categories. Calculation methods for failures due to wear and fatigue for A-components and for several B-components are described in the next sections.

Transmission Oil
There are two different ways to take the transmission oil into consideration in the context of a reliability evaluation of a transmission. The first option treats the oil as an individual component of the system described by a separate failure probability function. In addition, the failures of oil can be divided into failures due to aging and failures arising from dirt contamination. The machine element oil fails whenever a predefined state is reached. An important requirement for this approach is the availability of a calculation method that is able to determine the lifetime of an oil. As a second option, the oil can be taken into account through its influence on other mechanical components. The strength of the other components then depends on the current state of the oil. Of course, this kind of calculation method requires information about the dependency of the components' strength on the oil condition. For bearings, such methods are used to calculate the strength against pitting based on dirt particles in the oil DIN ISO 281 (2010). For pitting failures of gears, a method for considering aged oil is available Maisch (2012).
Gears
Gears are designed based on DIN 3990 (1987). The strength of gears depends on many parameters, such as material, geometry, manufacturing process, surface and environmental conditions. Failures of gears can arise either due to tooth fractures or due to pitting. Gears can fail due to scuffing as well, but as scuffing prediction of gears is not very well advanced and scuffing usually only occurs outside of predefined operating conditions, scuffing is currently not considered in reliability calculations Boog (2011).

Therefore, each gear has three different failure distributions: one for each tooth side representing pitting and one representing tooth fracture. While a tooth fracture ends the lifetime of a gear immediately, pitting does not. At the beginning, pitting only has an effect on noise, wear and efficiency. Therefore, failure due to pitting is defined as the state when pitting reaches 4 % of the tooth's surface Klocke and Pritschow (2004).

Bearings
The lifetime calculation of bearings has been standardized in DIN ISO 281 (2010). Dirt particles in the lubricant have a large negative influence on the bearing's lifetime. While the lifetime calculation in DIN ISO 281 (2010) takes dirt particles into account only with qualitative factors, this method can be extended to calculated factors Rombach (2009). The only kind of bearing failure currently considered for reliability calculation is pitting. By referring to DIN ISO 281 (2010), a bearing's strength regarding pitting can be determined.

Shafts
The design of shafts is standardized in DIN 743 (2012). Shafts in transmissions are stressed by alternating torques, as well as by bending and normal stresses, which result from forces applied by helical gearings. Furthermore, the geometry of the shafts has to be considered. Shaft shoulders, grooves and radii have a large influence on the internal stresses Naunheimer et al. (2007). With the availability of computer-aided calculation methods based on DIN 743 (2012), it is relatively easy to calculate a shaft's lifetime.

Friction Clutches
Although friction clutches belong to category B, it is possible to obtain at least a rough estimation of their lifetimes. The lifetimes of friction clutches depend on material, time in use, lubricant, temperature, age and the relative velocity and load inside the clutch. There is an unverified method available to estimate the lifetime by considering the wear of the clutch surfaces. A clutch is considered to have failed when a predefined state of wear is reached Hensel and Pflaum (2010). As there are too many influencing parameters, no completely verified failure distributions are available.

Radial Shaft Seals
Similar to clutches, seals belong to category B, and verified lifetime calculation methods are not available. Seals have a very complex tribological system, which is hard to quantify. A rough estimation of the lifetime of a seal can be made by considering only the temperature and the aging caused by high temperatures. For this, an unverified method is available Haas et al. (2010). However, this method requires the temperature in the frictional contact and considers only failures due to thermal degradation.
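As a concrete instance of these standardized lifetime methods, the following sketch implements the basic bearing rating life L10 = (C/P)^p underlying DIN ISO 281; the extended method's lubrication and particle factors are omitted, and the load values are illustrative.

```python
# Minimal sketch of the basic bearing rating life: L10 = (C/P)^p in 10^6
# revolutions, with p = 3 for ball bearings and p = 10/3 for roller
# bearings. Extended life factors (e.g., the oil particle factor) omitted.

def basic_rating_life(C_kN: float, P_kN: float, ball: bool = True) -> float:
    """L10 in millions of revolutions from dynamic load rating C and
    equivalent dynamic load P (illustrative values below)."""
    p = 3.0 if ball else 10.0 / 3.0
    return (C_kN / P_kN) ** p

L10 = basic_rating_life(C_kN=35.0, P_kN=7.0)     # -> 125 million revolutions
hours = L10 * 1e6 / (1500 * 60)                  # converted at 1,500 rpm
print(f"L10 = {L10:.0f} x 10^6 rev (~{hours:.0f} h at 1,500 rpm)")
```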
System Theory
The previous sections explained the determination of the lifetimes and failure distributions of mechanical components. To combine those individual distributions into the lifetime and failure distribution of an entire transmission, a system theory is necessary. There are several theories available, and depending on the structure of the system, the kind of the components' failure distributions and whether or not the system is supposed to be repaired within its lifetime, an optimal theory can be chosen. The system transport theory is considered to be the most extensive system theory Bertsche and Lechner (2004). Unfortunately, an application of this powerful method requires Monte Carlo simulations. As the implementation of a Monte Carlo simulation is a very elaborate process, it has so far not been applied to a reliability evaluation of a vehicle transmission in its full extent. The application of this theory would allow a prediction of the reliability of transmissions that are repaired during their lifetime. In this paper, however, it is assumed that the system will not be repaired, and that Boole's system theory can be applied Bertsche and Lechner (2004).

In general, there are two basic ways to model the reliability structure of a technical system (see Figure 7). A serial structure represents a system without any redundant parts. If one component of a serial system fails, the whole system fails. A parallel structure represents a system with redundant parts. In this case, the system only fails when all components have reached the end of their lifetime. By combining these two basic structures, it is possible to define the reliability structure of a complete technical system. Applying Boole's theory, the system reliability can then be calculated based on the individual component reliabilities according to Eq. (6), R_S(t) = ∏ R_i(t), for serial structures, or Eq. (7), R_S(t) = 1 − ∏ (1 − R_i(t)), for parallel structures (a sketch of both is given after the method description below). As can be observed from Eq. (6), the number of components is significant for the system's reliability, and the failure probability grows exponentially with an increasing number of components. In contrast, the reliability of parallel systems increases with a rising number of redundant components, see Eq. (7).

Method for Quantitative Reliability Evaluation of Technical Systems
After having described the necessary basics and the chosen system theory, a detailed method for the quantitative evaluation of system reliability is proposed in Figure 8.

1. As a first step, all relevant mechanical components of the system have to be identified. For this, a qualitative method like an FMEA is applicable. In this analysis, not only the most important power-transmitting components, such as gears and bearings, need to be evaluated, but also less obvious (and often neglected) components have to be taken into account, such as shafts, seals, housings and the lubricant. Knowing the function of all components and how they interact with each other is crucial for the reliability evaluation.

2.
As mentioned earlier, not all mechanical components or their kinds of failures have the same influence on the system's reliability. To determine which failures of mechanical components are critical for the system, a qualitative analysis of the components, such as an ABC-analysis, is necessary. Although verified calculation methods for components such as friction clutches and seals in category B are not available, several unverified methods have been proposed for calculating basic lifetimes. As clutches and seals are very complex, the prediction of their lifetimes is only possible on a very approximate level and is currently suited for rough estimations only. Nevertheless, these methods are applied in this evaluation to take those components into account.

3. In the next step, the reliability structure has to be created. A reliability structure displays whether or not a system contains redundant components, and can be defined by a combination of serial and parallel structures.

4. To determine the loads on the system, a typical load cycle is necessary. This cycle can be determined either by an actual measurement or by a simulation. Actual measurements are preferable as they best represent reality, but simulated load cycles are usually sufficient for an evaluation Buck (1973).

5. The loads on each individual component have to be defined in this step, which is done through analytical or numerical calculations based on the external loads. To accomplish this, information about the dimensions of the system components is required. If the loads are functions of distance or time, these dependencies have to be considered explicitly and the loads need to be available as functions of the individual revolutions or load alternations.

6. To quantify the component loads, it is necessary to classify the loads and transfer them into specific component load spectra. Previous work suggested that 16-64 classes are sufficient Bertsche and Lechner (2004). For gears and seals, the time-at-level procedure has been found to be suited best Renius (1977), while for shafts the Rainflow method is usually applied.

8. For the determination of the actual lifetime, the calculated stresses and strengths need to be connected by a suitable damage accumulation hypothesis. By applying such a hypothesis, the damage to each individual component can be calculated for the provided load cycle, and the residual lifetime can be determined.

9. The related failure distributions of the components can be determined by tests or from historical data. It has been shown that the failure distributions of mechanical components are usually described by Weibull distributions with shape parameters b > 1. The ranges of the Weibull parameters for different components and failures are given in Bertsche and Lechner (2004).

10. When the lifetimes and failure distributions of all components are known, the overall reliability and the failure probability function can be calculated. To do so, a system theory has to be selected. Among multiple different theories, Boole's theory is the most suitable one to determine the reliability of non-repairable systems. The availability of the system is equivalent to the reliability.
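As referenced in step 10, the Boole's-theory combination of Eqs. (6) and (7) can be sketched in a few lines; the component reliabilities below are illustrative.

```python
# Sketch of Boole's system theory: serial structures multiply component
# reliabilities; parallel (redundant) structures multiply failure
# probabilities. Component reliabilities are illustrative values.
from math import prod

def serial_reliability(R):
    """Eq. (6): R_S = prod(R_i) -- any component failure fails the system."""
    return prod(R)

def parallel_reliability(R):
    """Eq. (7): R_S = 1 - prod(1 - R_i) -- fails only if all components fail."""
    return 1 - prod(1 - r for r in R)

R = [0.99, 0.98, 0.95]                              # reliabilities at some time t
print(f"serial:   {serial_reliability(R):.4f}")     # 0.9217
print(f"parallel: {parallel_reliability(R):.6f}")   # 0.999990
```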
Application of Method
After having described the method for reliability evaluations, this method is applied to a heavy-duty power shift transmission with an input power range of 67-97 kW. This transmission can be used for utility vehicles such as forklifts, wheel loaders and dumpers. The input torque at the torque converter can be around 800 Nm. The shifting between the three speeds for forward and reverse drive occurs without power interruption. To achieve that, the transmission contains five friction clutches (see Figure 9, which shows the torque plan). Engaging and disengaging the clutches FWD and RWD switches between forward and reverse gear. In combination with the clutches 1ST, 2ND and 3RD, a speed is selected. Therefore, there are always two clutches engaged, and the remaining three are disengaged, see Figure 9. Furthermore, the transmission contains ten gears that are always engaged, 18 bearings, seven shafts and two radial shaft seals. For lubrication, a mineral oil with a kinematic viscosity of ν40 = 100 mm²/s is assumed. The oil temperature is defined as a constant 80 °C. As discussed earlier, the oil can be considered in two different ways. Here, the oil is taken into account as an influence on the mechanical components, and not as an individual component. The external loads have been calculated by a simulation of a driving cycle. The assumption for the simulation is a fictional dumper with a driving power of 95 kW that is used in a quarry. The dumper has an empty weight of 10 t and a payload of an additional 10 t. The driving cycle (Figure 10) covers a driving distance of about 6,000 m, which takes 12 minutes. The dumper is loaded at the bottom of the quarry, then drives uphill out of the quarry, travels a certain distance on level ground, unloads and goes back the same way. In addition to the driving resistance caused by the slope, a rolling resistance of 2 % of the vehicle's weight has been added. The resulting traction and velocity are simulated based on measured data of the elevation-time course of a real dumper driving in a quarry. During the simulated driving cycle, only four of the six transmission speeds are used. During 83.4 % of the distance, the third forward gear is engaged. The second forward gear and the third reverse gear both have a time share of 8 %. The first forward gear has a share of only 0.6 %. Nevertheless, all of the gears are under load at least once throughout each driving cycle.

It is assumed that the transmission is not repaired, so that the first failure of a component ends the transmission's lifetime. With this assumption, Boole's theory is applicable to calculate the system's reliability based on the failure distributions of the components. The components' lifetimes are calculated based on the external conditions of the transmission. In addition to the components in category A, bearings and gears, for which verified methods are available, the lifetimes of the seals and friction clutches are calculated using unverified calculation methods.

For the estimation of the lifetime of the seals, the temperature-dependent Arrhenius model is used. For this calculation model, only the oil sump temperature T_K and empirical material factors are necessary. When using the usual seal material Nitrile Butadiene Rubber, the lifetime of a seal can be calculated according to Eq. (8) Haas et al. (2010).
The lifetime of friction clutches can be approximated by assuming that the wear volume is proportional to the friction energy. Using an empirical material-dependent friction coefficient f_Fric, the duration per switch operation t_i, the number of switch operations j per driving cycle, the relative turning speed n_i and the clutch torque M_K, the wear volume V for a defined driving cycle can be determined (see Eq. 9). The clutch is considered failed as soon as the wear volume reaches a predefined threshold V_limit Niemann et al. (2004).

After having determined the lifetimes of the components, the individual failure distributions can be built by referring to existing Weibull parameters (see Table 1). The initial period of time without failure, t_0, is given relative to the B_10 lifetime. The characteristic lifetime T is calculated from the B_10 lifetime by solving F(B_10) = 0.1, which for the three-parameter Weibull distribution yields T = t_0 + (B_10 − t_0) / (−ln 0.9)^(1/b). For the failure distribution of the seals, a Weibull distribution with a shape parameter b = 1.85 has been assumed Haas et al. (2010). Due to the unavailability of a distribution for the friction clutches, a Weibull distribution with a shape parameter b = 3.5 has been defined; this distribution is similar to a normal distribution. Because the gear failures have been divided into pitting on both sides of the teeth and tooth fracture, the reliability structure contains 62 elements. As the analyzed transmission does not have any redundant parts, the reliability structure is completely serial.

Based on the geometry of the mechanical components, the external loads can be split and assigned to each individual component. Thus, each component has a load-time plot that has to be converted into a load-revolution plot to be quantified by a load spectrum with 16 classes. For the amount of particles in the oil, a linear variation in time of the oil particle factor e_C has been added; this affects the strength of the bearings. It is assumed that the amount of particles increases over the lifetime. The manufacturer of the transmission requires a change of the oil filter after every 500 hours of usage. The change of the oil filter is reflected in the calculation by a drop of the oil particle factor.

The resulting failure probability functions for both the entire power shift transmission and the different component groups are displayed in Figure 11. The radial shaft seals seem to be the critical components, having the biggest influence on the system's reliability. As the reliability calculation of the seals is fairly unverified, it might display the transmission as less reliable than it actually is. Bearings and friction clutches have a similar reliability. The gears, on the other hand, seem to be oversized and contribute only slightly to the failure probability of the entire transmission. The lifetime calculation of the shafts based on the external loads has been done according to DIN 743 (2012). According to the results, the shafts are heavily oversized and do not have any influence on the reliability of the transmission. Focusing only on the components in category A, the bearings seem to be the critical mechanical components. The most critical bearing is the one near the output of the transmission (B15), as it is stressed all the time with high loads, see Figure 9. Theoretically, several bearings have an infinite lifetime, since they are only put under load when the relative turning speed between their outer and inner rings is zero (B4, B9, B10, B13 and B14).
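A minimal sketch of the three-parameter Weibull relations used above: computing T from a given B_10 lifetime and evaluating F(t). The parameter values are illustrative, not the entries of Table 1.

```python
# Sketch of the three-parameter Weibull used above: T follows from
# F(B10) = 0.1, i.e., T = t0 + (B10 - t0) / (-ln(0.9))**(1/b).
# Parameter values are illustrative.
from math import exp, log

def char_lifetime(B10: float, b: float, t0: float = 0.0) -> float:
    """Characteristic lifetime T from the B10 lifetime."""
    return t0 + (B10 - t0) / (-log(0.9)) ** (1.0 / b)

def failure_probability(t: float, T: float, b: float, t0: float = 0.0) -> float:
    """F(t) = 1 - exp(-((t - t0)/(T - t0))**b) for t >= t0."""
    if t <= t0:
        return 0.0
    return 1.0 - exp(-(((t - t0) / (T - t0)) ** b))

T = char_lifetime(B10=5000.0, b=1.85)   # e.g., a seal-like shape parameter
print(f"T = {T:.0f} h, F(T) = {failure_probability(T, T, 1.85):.4f}")  # 0.6321
```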
The mean lifetimes of the component groups are displayed in Figure 12. The lifetimes of the components within the groups "gears" and "bearings" have a wide distribution. This is another indicator that the methods currently in use are not sufficient for an economic design process, as the target is to design all components with the same life expectancy.

A reason for the different lifetimes between the component groups might be the fact that the lifetime calculation of bearings is fairly easy, as it is documented in engineering standards. For shafts and gears, only the calculation of their strength is described in engineering standards, and the lifetime calculation has to be done manually by comparing stress and strength using appropriate damage accumulation hypotheses. Because of this, there might be a tendency to oversize components rather than spending time on reliability evaluations. An oversizing of 10 % causes in general only 10 % additional costs, but can double the lifetime of a transmission Naunheimer et al. (2007). However, for an economic design of a transmission, this is not an appropriate method.

In Figure 13, the influence of the driving cycle on the lifetime of the transmission is illustrated. As demonstrated, a reduction of the driving time between loading and unloading by 50 % reduces the lifetime of the transmission by 40 %, since the share of higher loads in the whole cycle increases. This shows the importance of selecting a well-suited load cycle that represents the real use case of the vehicle.

Conclusion
Ensuring the reliability of products becomes more and more important due to higher product complexity and customer demands. The qualitative reliability methods currently in use do not seem to be sufficient to achieve the target of designing systems with a sufficient reliability. Therefore, an improved method for the evaluation of system reliabilities has been proposed. This method provides a step-by-step guideline on how to identify critical mechanical components in a technical system and how to determine the lifetime and failure probability functions of the individual components. It covers common mechanical components in transmissions and takes environmental influences such as aging, temperature and dirt particles in the lubricant into account.

The method has been applied to a real vehicle transmission of a fictional quarry dumper. The results reveal that the seals, clutches and bearings, and especially the bearing near the output of the transmission, seem to be the critical components, while the gears and shafts seem to be heavily oversized.

Based on such results, it is possible to design systems in a more economical way without the risk of decreasing reliability. The development costs can be reduced, as only a small number of tests is necessary for reliability evaluations. In addition, the proposed method can generate knowledge about the current system state based on the individually experienced load history. Hence, unplanned system breakdowns can be avoided and predictive maintenance strategies can be developed. Furthermore, because the main causes of failures can be predicted, selective condition monitoring concepts can be applied.
In the future, more verified calculation methods for the reliability evaluation of components used in technical systems should be developed so that the accuracy of future reliability evaluations can be improved. Additionally, in order to calculate repairable systems, it is also necessary to find more efficient algorithms to solve the equations of more complex system theories.

Figure 1: Number of recall campaigns of cars in Germany from 1998-2014, data from DAT (2015)
Figure 7: Reliability structure a) serial b) parallel
Figure 8: Method for reliability evaluation of technical systems
Figure 10: Driving cycle of a fictional dumper in a quarry, data from Rebholz et al. (2014)
Figure 11: Failure probability
Figure 13: Influence of cycle length
Table 1: Weibull parameters of failure distributions
Relationship of variable region genes expressed by a human B cell lymphoma secreting pathologic anti-Pr2 erythrocyte autoantibodies.

To study the biology of cold agglutinin disease we previously established EBV-transformed B cell clones isolated from a patient with splenic lymphoma of an early plasmacytic cell type and immune hemolysis due to an anti-Pr2 cold agglutinin. These clones had an aberrant chromosomal marker identical to the patient's B cell lymphoma, and each secreted IgMκ anti-Pr2 similar to the pathologic autoantibody in the serum of the patient. In this study, we have further investigated the Pr2-specific autoimmune response through nucleotide sequencing of VH and VL region genes. We have shown that the seven clones share the same VDJ/VJ gene segments and junctional elements, confirming their clonal origin. The VH sequences were 88% homologous to a VHI germline gene, while the VL sequences were 97% homologous to a VκIII germline gene. Only 4 somatic mutations (3 silent and 1 conservative) were found in greater than 5,000 bp sequenced, suggesting that a low mutation rate existed. Based on a tumor mass of 10^12 cells and a minimum of 40 divisions, we estimated the somatic mutation rate to be 4.45 x 10^-5 mutations per base pair per division (m/bp/d). This somatic mutation rate is similar to those estimated for acute lymphocytic leukemia (pre-B cell) and chronic lymphocytic leukemia (intermediate B cell), but significantly lower than the mutation frequency in follicular lymphomas (activated B cell). We propose that the difference in somatic mutation frequency of a B cell tumor may be related to the stage of B cell differentiation. In addition, the low mutation frequency observed in the Pr2-specific B cell tumor may also reflect, in part, selection by autoantigen to conserve sIg structure and specificity.

Many autoimmune diseases are associated with autoantibody formation. Yet in only a few instances has it been clearly established that the autoantibody associated with the disease contributes to the pathogenic process. Cold agglutinins are autoantibodies that preferentially bind to RBC membrane antigens at low temperatures (1-3). Cold agglutinins may be pathologic, found in association with cold agglutinin disease (CAD), or benign, found in the sera of healthy individuals. The antigenic specificities of benign and pathologic cold agglutinins are similar and include various glycolipids and glycoproteins (1, 4). CAD has been classified as either idiopathic (i.e., not associated with an underlying disease) or secondary to lymphoid neoplasms or infections (2). In cold agglutinin disease, whether idiopathic or secondary to lymphoid neoplasms, cold agglutinins are typically IgMκ monoclonal autoantibodies, as defined by a homogeneous peak on the serum electrophoretic pattern. To further examine the clonality and cellular origin of pathologic cold agglutinins we have studied B cell clones from a patient (RR) with a splenic lymphoma associated with immune hemolysis due to an anti-Pr2 cold autoantibody (5). Cytogenetic studies of splenic lymphocytes demonstrated an abnormal karyotype (51, XX, +3, +9, +12, +13, +18). After EBV transformation of splenic lymphocytes, seven clones were isolated; each clone had the same abnormal karyotype and secreted an IgMκ anti-Pr2 cold agglutinin.
Further studies of surface phenotype, Ig gene rearrangements, and antibody specificity suggested that the EBV-transformed clones secreted an RBC autoantibody that was identical to the pathogenic autoantibody causing immune hemolysis in the patient (6). However, while the IEF spectrotypes of the autoantibodies derived from five of the seven B cell clones were identical to the cold agglutinin isolated from the patient's serum, two of seven clones had distinctive spectrotypes. This finding indicated that the EBV-transformed B cell clones were structurally heterogeneous, even though they retained the same autoreactive specificity. Idiotypic heterogeneity of human follicular lymphomas has been ascribed to a high rate of somatic mutations in the V region genes (7-10). If a similar somatic mutational process were taking place in the autoreactive tumor described here, we would predict that sequence variants could account for the observed spectrotypic diversity. In this report, we have defined the molecular basis of autoantibody heterogeneity of B cell tumor origin through nucleotide sequence analysis of both heavy (VH) and light (VL) chain variable region genes. By comparing the junctional sequences formed by the joining of the VH, D, and JH, as well as the Vκ and Jκ gene segments, we have evaluated the clonality of the seven clones at a molecular level. Additionally, a comparison of the V region sequences has allowed us to examine the frequency of nucleotide substitutions and to determine if the deduced amino acid sequences can explain the different IEF spectrotypes observed.

cDNA Synthesis and Cloning. Double-stranded cDNA was made by the method described by Gubler and Hoffman (13), as modified for Ig genes (14), using 5 µg of LS series RNA. Blunt-ended double-stranded cDNA was ligated into the phosphatased SmaI site of phagemid pBS M13- (Stratagene, San Diego, CA). Escherichia coli strain bSJ72 was transformed by recombinant phagemid DNA. Even though a highly enriched library containing human immunoglobulin genes was generated, it was necessary to screen the partial library with VH (15) and Vκ (16) gene probes. Northern blot analysis of isolated RNA (data not shown) indicated that the expressed V genes belonged to the VHI and VκIII families. Agarose gel purification of probes proved essential, since plasmid DNA would cross-hybridize to the phagemid DNA on the filters.

Sequencing of cDNA Clones. After clones were identified as containing a human heavy or light chain cDNA insert, the phagemid clones, which contain the M13 origin of replication, were grown with helper phage K07 (17) to rescue ssDNA. Since the pBS M13- vector yields negative-stranded ssDNA, the M13 reverse primer was used for sequencing. cDNA sequences were determined with the method of Sanger (18).

Statistical Analysis of Observed Somatic Mutations. The seven cells produced a pattern of mutations in which three cells were either unmutated or carried identical mutations, while the other four cells each had their own unique set of mutations. This 3-1-1-1-1 pattern was used to determine a maximum likelihood estimate (MLE) as well as confidence bounds on the underlying mutation rate. This was done by simulating the clonal proliferation process for this situation by adapting a previously developed general Monte-Carlo model (20). The computer model of cell proliferation starts with one cell and produces a random succession of left or right progeny until a total of 40 divisions is reached (Fig. 1 A).
Then the same tree is restarted, picking left or right random progeny as before. This time the tree is merely followed if the chosen left or right progeny already exists. If the tree fails to contain the requested random branch, then a new cell is added in that direction, and more new cells are again added until the 40th division is reached. This is continued until seven strands of the tree have been extended to 40 divisions. Note that the tree would contain 2^40 (1.1 x 10^12) cells at its last level if completed, but our simulation generates only seven. These seven are a random selection of the entire set of 2^40. Each new cell acquires a random number of mutations determined from the Poisson distribution. The parameter, i.e., the mean of the Poisson distribution, is the mutation rate. The mutation rate is defined by the user, and we used a range of values to investigate the ability of clonal development to create the observed pattern of mutations. Since each cell contains a large number of independently mutating bases, each with very low probability (see Results), the Poisson distribution is appropriate for the number of mutations per cell per division. The chance of silent, neutral, or defunctionalizing mutations has been determined both theoretically and empirically (20; Shlomchik, M. J., S. Litwin, and M. Weigert, manuscript in preparation). The probability of each type is taken into account in generating the tree so that the likelihood of a lineage is determined. If a pathway incurs a lethal mutation, then the program inserts a dead cell in the tree. The creation of dead cells puts blockages in the tree structure, since dead cells do not proliferate. Subsequent passes to generate the needed seven cells may encounter a dead cell. If this happens, the program restarts the pass. In the event that no path to 40 divisions exists by virtue of dead cells, the entire tree is aborted. When seven cell lines have been extended to 40 divisions, the clonal identity of each cell is determined. Two cells in the final division are regarded as identical if and only if they have a common ancestor from which neither has mutated. The possibility that two final-division cells contain identical, independently derived mutations is ignored, since it is so improbable. However, with a little additional programming, individual base mutations could be recorded and this possibility taken into account. The program requests the user to specify a mutation rate, then it determines if the observed pattern is likely to occur. It does this by simulating the process of cell proliferation 1,000 times, all at the same mutation rate. In each repetition the program checks if the seven cells alive at division 40 match the observed pattern. At the conclusion of the 1,000 repetitions, the number of times the pattern was observed is tabulated. If the seven simulated cells make up 3 identical ones, i.e., each with either no mutations or the same set of mutations, and four others that are each unique, then this simulation matches the observed outcome. The number of occurrences of this pattern among the 1,000 repetitions, divided by 1,000, is an estimate of the probability of the observed outcome for this value of the mutation rate. By running the program several times, entering different mutation rates each time, we can determine what rate gives the biggest tally, and hence the biggest estimate of the probability of the observation.
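A condensed reimplementation of this procedure is sketched below. It is our illustration, not the authors' original program; lethal mutations and tree abortion are omitted, which the text notes are rare at the rates considered.

```python
import math
import random

def poisson(mean):
    # Knuth's method for a Poisson-distributed variate (fine for small means).
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate_once(rate, divisions=40, n_cells=7):
    # Grow n_cells random lineages on a shared binary tree; every newly
    # created node receives a Poisson number of mutations.
    node_mut = {}
    leaves = []
    for _ in range(n_cells):
        path = ()
        for _ in range(divisions):
            path += (random.randint(0, 1),)
            if path not in node_mut:
                node_mut[path] = poisson(rate)
        # Two final cells are identical iff no mutated node separates them,
        # i.e. iff they carry exactly the same set of mutated ancestors.
        leaves.append(frozenset(
            path[:i] for i in range(1, divisions + 1) if node_mut[path[:i]] > 0))
    sizes = sorted((leaves.count(c) for c in set(leaves)), reverse=True)
    return sizes == [3, 1, 1, 1, 1]

def pattern_probability(rate, trials=1000):
    # Fraction of simulated trees reproducing the observed 3-1-1-1-1 pattern;
    # maximizing this over the rate approximates the MLE.
    return sum(simulate_once(rate) for _ in range(trials)) / trials

# Scan around the reported MLE of 0.031 mutations/cell/division (over the
# ~700 bp of V-region sequence, ~4.45e-5 mutations/bp/division).
for rate in (0.0085, 0.031, 0.12):
    print(f"rate {rate}: P(3-1-1-1-1 pattern) ~ {pattern_probability(rate):.3f}")
```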
The 1,000 trials are regarded as Bernoulli trials, and we are estimating the chance of success, in this case the chance of the original observation of 3-1-1-1-1. The probability of success in a trial is the likelihood function that is to be maximized. It depends on the mutation rate and would be conditioned on the fact that trials that are aborted should not be considered, since they cannot account for the observed data. It would also account for the patterns of all possible cell lineages after 40 divisions. We note that trial abortion is very rare at the mutation rates we are using (see Results), namely p(aborted tree) < 0.00178 (computation not shown). However, we can investigate the underlying process without explicitly knowing the likelihood function by using a computer simulation. When we run the program at a very low mutation rate (0.001 mutations per cell per division) it rarely produces five different clones, whereas at a high mutation rate (0.15 mutations per cell per division) it produces six or seven different clones. Thus we adjust the mutation rate to an intermediate value so that the program reflects the observed data. By applying a series of different values for the mutation rate, we identify one that maximizes the program's production of mutational patterns similar to that observed. This rate is our approximation to the maximum likelihood estimate of the true rate. Three similar statistics are collected. First, the outcome is tested for being identical to the observed 3-1-1-1-1 pattern. Next, the outcome is tested for containing at least five clones; finally, it is tested for containing no more than five clones. The last two statistics are used to determine ~95% confidence bounds on the true mutation rate. These three statistics are tallied and the entire procedure is repeated 1,000 times. Finally, the program outputs the number of times the observed mutational pattern was obtained, as well as the tallies of the other two statistics.

Results

Nucleotide Sequences of VH and Vκ Regions. The VH and Vκ region gene segments of the seven EBV lines were cloned as cDNAs by extending an oligonucleotide primer homologous to the 5' region of the human heavy and light chain constant regions (Figs. 2 and 3). The use of the µ chain primer has been previously reported (8, 9). The choice of the κ primer was based on identifying a human Cκ region that was homologous to an evolutionarily conserved Cκ sequence from the mouse (21). The κ primer consisted of a 21-mer, 18 nucleotides from the 3' end of the J segment of a rearranged gene. Two or more independent cDNA clones were isolated and sequenced because of concern for sequencing or cloning artifacts. Clones were determined to be of independent origins on the basis of different cDNA sizes and orientation of inserts. Sequences of cDNA clones isolated from a particular LS cell line were all identical, indicating high fidelity of the reverse transcriptase. Of the several thousand nucleotides sequenced, no mutation due to polymerization errors of the reverse transcriptase was seen. Furthermore, to assess for mutations occurring due to tissue culture, cDNA clones of cell lines LS2 and LS5 were isolated and sequenced at 6-mo intervals; no changes in nucleotide sequences were seen among repeat isolates. The nucleotide sequences of the seven VH and VL genes are shown in Figs. 2 and 3, respectively. It is evident that, except for a few nucleotide substitutions, the sequences are almost identical.
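The comparison underlying this statement is simply a position-by-position count of differences against a consensus. A toy sketch follows; the sequences are invented placeholders, not the published LS sequences.

```python
# Count nucleotide substitutions of each clone's V-region sequence against a
# consensus, as in Figs. 2 and 3 (toy data for illustration only).
consensus = "ATGGTGCAGCTGGTGCAGTCTGGA"
clones = {
    "LS1": "ATGGTGCAGCTGGTGCAGTCTGGA",
    "LS4": "ATGGTGCAGTTGGTGCAGTCTGGA",   # one substitution (C -> T)
}

for name, seq in clones.items():
    diffs = [(i + 1, c, s)
             for i, (c, s) in enumerate(zip(consensus, seq)) if c != s]
    identity = 100.0 * (len(seq) - len(diffs)) / len(seq)
    print(name, f"{identity:.1f}% identity", diffs)
```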
Identical VH-D, D-JH, Vκ-Jκ, and N gene segments are all consistent with a clonal origin. Clonality is further supported by the unusual karyotype associated with the tumor and EBV-transformed cells, as well as the Southern blot analyses of heavy and light chain loci showing identical Ig gene rearrangements (6). Although the sequences are nearly identical, several nucleotide substitutions are seen that could only be attributed to somatic mutations if all cells were in fact derived from a single progenitor cell. Single T substitutions at amino acid positions 72 (LS4) and 78 (LS1) in the heavy chain gene and position 92 (LS6) in the light chain gene are all silent changes and produce no predicted amino acid changes. The only nucleotide substitution that changes the predicted amino acid sequence is at amino acid position 30 (LS8) in the heavy chain gene. The change (Thr to Ser) is considered a conservative substitution, since both have aliphatic hydroxyl side chains and no change in the net charge of the deduced protein is expected.

[Figure 3 legend: Nucleotide sequences of VL regions from anti-Pr2 EBV clones. Amino acid translations are given above the nucleotide sequence and numbered according to reference 22. Complementarity-determining regions (CDR1 and CDR2) and the J4 gene segment are as indicated. Differences from the consensus sequence (LS1) are indicated; homology of a nucleotide with the consensus sequence on the top line is shown as a dash. The κ oligonucleotide sequence used to prime cDNA clones is indicated with an asterisk. These sequence data have been submitted to the EMBL/GenBank Data Libraries.]

The gene sequences were compared for homology to previously published VH and Vκ genes, and in particular to other human RBC autoantibody sequences. The VH sequences belong to the VHI gene family, as demonstrated by Northern blot analysis (data not shown), and were found to be 88% homologous to a germline VHI gene (Fig. 4 A) (22). The LS light chain sequences were 97% homologous to a germline VκIII gene (Fig. 4 B) (23). We have surveyed the D gene segments known to date and have not found identical sequences (24, 25). The V region gene sequences of the LS autoantibodies differ from those of another human anti-Pr2 RBC autoantibody, which by NH2-terminal sequencing uses VHIII and VκIV family genes (26). The use of different V genes by anti-Pr2 autoantibodies suggests that the human response to this antigen is not highly restricted.

Confidence Interval for the Observed Mutation Rate. To estimate the mutation frequency, we assumed a tumor mass of at least 10^12 cells, corresponding to a minimum of 40 divisions (2^40 is approximately 1.1 x 10^12). With the mutation rate of 0.031 mutations per cell per division (4.45 x 10^-5 m/bp/div), the chance of the observed pattern was maximized and was about 27%, i.e., the 1,000 repetitions produced 266 outcomes each of which contained a set of three identical cells plus four completely unique cells (Fig. 1 B). This is the MLE for the mutation rate. Setting the mutation rate to 0.0085 mutations per cell per division (1.22 x 10^-5 m/bp/div), the chance of observing at least five clones was reduced to 0.028, and by setting it to 0.12 mutations per cell per division (1.72 x 10^-4 m/bp/div) the chance of observing five or fewer clones was reduced to 0.023. Thus an approximate 95% confidence interval for the mutation rate is (0.0085, 0.12) mutations per cell per division, or (1.22 x 10^-5, 1.72 x 10^-4) mutations per base pair per division. Our estimate of the tumor cell mutation rate rests on the assumption that tumor cells do not die.
Thus, it must be taken as an upper bound until better approximations to tumor cell birth and death rates are available.

Discussion

The humoral immune repertoire is produced by B lymphocytes that have the ability to respond to a wide range of antigenic specificities. The differentiation of B lymphocytes can be divided into two stages. The first stage is antigen independent, involving the development of stem cells into B cells. After acquiring antigen binding receptors, B cells migrate to the peripheral organs such as lymph nodes and spleen, where they encounter various antigens. The second stage involves the proliferation and subsequent differentiation of B cells into Ig-secreting plasma cells. Since autoantigens are ubiquitous, it follows that autoimmune responses must be controlled by regulatory mechanisms such as clonal deletion and/or T cell suppression. Although autoantibodies are associated with many autoimmune disorders, their role in the pathogenesis of disease in most cases is unclear. In contrast, the pathologic role of RBC autoantibodies in immune hemolysis is well established (1). To study the biology of these pathogenic RBC autoantibodies, we have established seven EBV-transformed B cell clones from a patient (RR) with splenic lymphoma and immune hemolysis due to an anti-Pr2 RBC autoantibody. Previously, we had demonstrated that these EBV-transformed clones secreted the same pathogenic autoantibody as present in the serum of the patient. The sequence data of this report confirm the clonal relatedness of these lines. By IEF analysis, however, two of seven clones secreted autoantibodies with different spectrotypes. The different IEF banding patterns could be explained by a post-translational event, such as glycosylation, or by somatic mutation of the primary nucleotide sequence. Altered glycosylation could result from a change in primary sequence leading to the appearance or disappearance of glycosylation sites, or by variable glycosylation at a particular amino acid without a change in the primary amino acid sequence (27). The observed somatic mutations in the LS cell lines are either silent or conservative and thus do not account for any charge differences. Although the conservative threonine to serine substitution (LS8; VH position 30) occurred in one of the autoantibodies with a different spectrotype, this somatic mutation did not involve an Asn-X-Ser/Thr recognition sequence required for N-linked glycosylation (27). O-linked glycosylation of a serine residue in the V region has been reported only once, in an abnormal human myeloma λ light chain (28). It is therefore unlikely that the two distinctive spectrotypes result from the nucleotide substitutions found in the VH and Vκ region genes of the seven clones. Variable glycosylation and/or mutations in the Ig constant regions are alternative causes for spectrotypic differences and cannot be ruled out. The small number of somatic mutations, i.e., only 4 in >5,000 bases sequenced, shows that at the time of sampling, a low somatic mutation frequency existed in this B cell tumor population. One can speculate that the mutation rate may have been high at one point but then slowed or even stopped. Alternatively, a high somatic mutation rate may never have existed. The idea that the mutation rate can vary in the expansion and maturation of a B cell clone is supported by data from murine cell lines with various specificities (including autoantigens) and representing different stages of B cell differentiation (20, 29, 30).
Mutations appear to be infrequent in the preimmune repertoire and primary immune response (estimated at <10^-5 m/bp/d) (31, 32). However, during subsequent steps of B cell maturation, characteristic of secondary immune responses, somatic point mutations are introduced in a stepwise fashion at a rate approximating 10^-3 m/bp/d (33, 34). At later stages of B cell differentiation, as demonstrated in studies of transformed plasma cells, somatic mutations are considered to occur again at a lower rate (estimated mutation frequency between 10^-6 and 10^-5 m/bp/d) (35). The type of lymphoma analyzed in this report differs in several aspects from other B cell malignancies whose V regions have been analyzed (36, 37). First, the B cell tumor of the patient in this report consists of early plasmacytoid cells, which represent a more mature stage of B cell differentiation than the types involved in acute lymphoblastic leukemia (pre-B cell) (38), chronic lymphocytic leukemia (immature-mature B cell) (39), and follicular lymphoma (follicular center cell, activated B cell) (8, 9). Second, the B cell tumor described here is unique in that its specificity is well defined. Based on the available V region sequences from these four different types of B cell lymphomas, a correlation is proposed between the stage of B cell ontogeny and the estimated mutation frequencies (see Fig. 5). In a recent study of VH sequence analysis from patients with acute lymphoblastic leukemia (ALL), no somatic mutations were found in >15,000 nucleotides sequenced (38). Based on this finding, it was calculated that the prevalence of mutations in these ALL tumors was <6.7 x 10^-6 m/bp/d. In the two cases of CLL, no evidence for sequence heterogeneity of expressed V genes was observed; the V sequences isolated from two unrelated individuals are highly homologous to each other and to a previously published germline V sequence (39, 40).

[Figure 5 legend: Hypothetical correlation of the differentiation stage of different B cell tumors and somatic mutation rate. I. ALL; pre-B cell; low somatic mutation rate estimated (39). II. CLL; immature-mature B cell; low somatic mutation rate estimated (40, 41). III. Follicular lymphomas; activated B cell, likely of follicular center cell origin; somatic mutation rate is high (8, 9). IV. Well-differentiated lymphoma; anti-Pr2-secreting B cell lymphoma, early plasmacytoid cell type; low somatic mutation rate estimated (this paper). The solid line indicates somatic mutation rates estimated for four types of B cell lymphomas representing different stages of B cell differentiation. The dotted line reflects the rare occurrence of somatic mutations in non-Ig variable region loci (32).]

In contrast, a relatively high frequency of somatic mutation occurs in follicular B cell lymphomas, similar to the mutational process found in normal differentiating B cells (8, 9). The mutation rate has not been determined for these human B cell tumors, because the number of cell divisions is not known. However, with some assumptions we have estimated the mutation rate of the lymphoma described in this report to be 4.45 x 10^-5 m/bp/d (see Results). Similarly, a mutation rate has also been estimated for cases of ALL. These estimated rates imply that somatic mutation rates significantly lower than those found in activated B cells would occur in lymphomas representing both earlier and more mature stages of differentiation (Fig. 5).
We next determined if the prevalence of somatic mutations was significantly different among the various types of lymphomas by comparing the frequency of silent mutations found in the previously published VH and VL sequences (8, 9, 38-40). Only silent mutations were considered in order to exclude any bias due to selection. Using Fisher's exact test for the hypergeometric distribution, the frequency of silent mutations in the plasmacytoid B cell lymphoma (3/4,992 bp sequenced) was significantly lower than in follicular lymphoma (16/2,124 bp sequenced; p < 0.00001), but was not significantly different (p > 0.5) from the observed frequency in ALL (0/15,000) and CLL (0/1,428) (39). This difference can be explained either by a higher mutation rate in follicular lymphomas or by the possibility that these lymphomas represented a larger number of cell divisions, allowing for a greater number of observed mutations. The result of this statistical analysis also fits with the proposed model illustrated in Fig. 5. Thus, this model proposes that B cell tumors with low somatic mutation frequencies will include cases of ALL and CLL, consisting of pre-B and intermediate B cells, as well as lymphomas representing plasmacytic, more mature stages of B cell differentiation.

The role of exogenous antigens and autoantigens in the clonal selection and expansion of antigen-specific B cells during an immune response has been studied in several animal model systems (28-30). Although binding of sIg with autoantigen (i.e., Pr2) may be important in driving a specific B cell clone to expand, other secondary factors such as increased oncogene expression are likely to contribute to the malignant transformation of these autoreactive B cell clones (41, 42). This view of lymphoma development, where clonal expansion and malignant transformation are separate and independent events, is supported by the observed clinical spectrum of cold hemagglutinin disease (1). At one end of the spectrum are patients with an expanded B cell clone producing a monoclonal cold agglutinin, identified as a homogeneous band on serum protein electrophoresis; these patients have no evidence for lymphoma and are diagnosed as having idiopathic cold hemagglutinin disease. At the other end of the spectrum are patients with the secondary form of cold hemagglutinin disease, whose expanded B cell clone has undergone malignant transformation; these patients present with or eventually develop clinical lymphoma. In summary, the low mutation frequency observed in this Pr2-specific B cell tumor may not only be related to the tumor differentiation stage but may also reflect selection by autoantigen to retain Ig structure and specificity. The conserved nature of sIg receptors expressed by B cell tumors of this type would predict a good response to antiidiotype therapy (43). Additional studies of V genes from different types of B cell lymphomas, representing various stages of differentiation, will contribute to understanding the biology of B cell neoplasia and also define the potential for passive immunotherapy.

Summary

To study the biology of cold agglutinin disease we previously established EBV-transformed B cell clones isolated from a patient with splenic lymphoma of an early plasmacytic cell type and immune hemolysis due to an anti-Pr2 cold agglutinin. These clones had an aberrant chromosomal marker identical to the patient's B cell lymphoma, and each secreted IgMκ anti-Pr2 similar to the pathologic autoantibody in the serum of the patient.
In this study, we have further investigated the Pr2-specific autoimmune response through nucleotide sequencing of VH and Vκ region genes. We have shown that the seven clones share the same VDJ/VJ gene segments and junctional elements, confirming their clonal origin. The VH sequences were 88% homologous to a VHI germline gene, while the Vκ sequences were 97% homologous to a VκIII germline gene. Only 4 somatic mutations (3 silent and 1 conservative) were found in >5,000 bp sequenced, suggesting that a low mutation rate existed. Based on a tumor mass of 10^12 cells and a minimum of 40 divisions, we estimated the somatic mutation rate to be 4.45 x 10^-5 m/bp/d. This somatic mutation rate is similar to those estimated for acute lymphocytic leukemia (pre-B cell) and chronic lymphocytic leukemia (intermediate B cell), but significantly lower than the mutation frequency in follicular lymphomas (activated B cell). We propose that the difference in somatic mutation frequency of a B cell tumor may be related to the stage of B cell differentiation. In addition, the low mutation frequency observed in the Pr2-specific B cell tumor may also reflect, in part, selection by autoantigen to conserve sIg structure and specificity.
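The Fisher's exact comparison of silent-mutation frequencies quoted in the Discussion above can be reproduced with standard tools. The sketch below uses SciPy (which obviously postdates the original analysis) and treats each data set as a 2x2 table of mutated versus unmutated bases; this framing of the table is our assumption.

```python
from scipy.stats import fisher_exact

# Silent mutations per bp sequenced, as quoted in the Discussion:
# plasmacytoid lymphoma 3/4,992 bp vs. follicular lymphoma 16/2,124 bp.
table = [[3, 4992 - 3],
         [16, 2124 - 16]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio {odds_ratio:.3f}, p = {p_value:.2e}")  # p << 0.001,
# consistent with the reported p < 0.00001
```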
Recycling Waste Soot from Merchant Ships to Produce Anode Materials for Rechargeable Lithium-Ion Batteries

In this study, the waste soot generated by ships was recycled to produce an active material for use in lithium-ion batteries (LIBs). Soot collected from a ship was graphitized by a heat treatment process and used as an anode active material. It was confirmed that the graphitized soot was converted into a highly crystalline graphite, and it was found to form carbon nano-onions with an average diameter of 70 nm. The graphitized soot showed a high discharge capacity and an excellent cycle life, with a reversible capacity of 260 mAh g-1 even after 150 cycles at a rate of 1 C. This study demonstrates that the annealed soot, with its unique graphitic multilayer structure, has an electrochemical performance that renders it suitable as a candidate for the production of low-cost anode materials for use in LIBs.

Recycling waste soot in this way can also help provide renewable energy. This is possible because graphite is often used as an active anode material in LIBs, and the soot generated by marine diesel engines is mostly composed of carbon and graphitic nanostructures. Artificial graphite for use in LIBs is produced by first obtaining a carbon precursor, carbonizing it, and then graphitizing it to increase its crystallinity. In the case of graphite reformed from waste soot, such as that used in this study, the precursor generation and carbonization processes are performed in a combustion engine, and only the graphitization process needs to be carried out. This can make this method much more cost effective than other methods of producing artificial graphite. In this study, soot samples collected from a marine diesel engine of an ocean-going vessel were analysed by high-resolution transmission electron microscopy (HRTEM), X-ray diffraction (XRD), Raman spectroscopy, and Brunauer-Emmett-Teller (BET) theory to investigate their structural characteristics. In order to improve the crystallinity of the soot and facilitate the insertion of Li ions into the graphene layers, graphitization was conducted by annealing at 2700 °C. LIBs were manufactured using the annealed soot, and their electrochemical performances were evaluated to verify the possibility of using this material in anodes for LIBs.

Experimental

Material collection. Soot was collected from container ships currently in operation. Detailed specifications of the ship and its engine are shown in Tables 1 and 2. Note that soot can be generated by various machines in the engine room of a ship; in this study, we collected soot from the economizer, where the largest amount of soot accumulates. The schematics of the economizer and the specifications of the bunker fuel oil that is the precursor of the soot are shown in Fig. 1 and Table 3, respectively.

Graphitization procedure. In the graphitization procedure, 10 g of soot was placed in an ultra-high temperature furnace (Thermvac Engineering, Korea) and heated to 2700 °C (to ~1800 °C at 10 °C/min, then to ~2400 °C at 5 °C/min, and finally to ~2700 °C at 3 °C/min). The soot was held at this temperature for 2 h under a flow of Ar gas (4 L/min). The furnace was then allowed to cool naturally to ambient temperature, yielding the annealed soot.

Carbon characterization. The morphology of the soot was investigated by transmission electron microscopy (TEM) (JEM-2100F; JEOL, Japan) at an acceleration voltage of 200 kV.
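As a quick back-of-the-envelope reading of the graphitization schedule above, the following sketch totals the furnace time. The ~25 °C ambient starting temperature is our assumption; it is not stated in the text.

```python
# Total furnace time implied by the stated ramp rates and the 2 h hold.
segments = [(25, 1800, 10), (1800, 2400, 5), (2400, 2700, 3)]  # (T0, T1, C/min)
ramp_min = sum((t1 - t0) / rate for t0, t1, rate in segments)
total_h = (ramp_min + 120) / 60          # plus the 2 h hold at 2700 C
print(f"ramp {ramp_min:.0f} min, total ~ {total_h:.1f} h")     # ~ 8.6 h
```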
XRD profiles were obtained (D8 Discover, Bruker, Germany) using Cu Kα radiation (λ = 1.540598 Å) in the 2θ range of 10-90°, with a step size of 0.02° and a scan speed of 2° min-1. Raman spectra were recorded using a Thermo Fisher Scientific Raman spectrometer (Thermo Fisher Scientific, USA) with a laser excitation wavelength of 532 nm. BET surface areas were calculated from N2 adsorption-desorption isotherms obtained using a Quantachrome sorption analyser (Autosorb-1, USA). Prior to BET measurement, all samples were subjected to heat treatment for 2 h at 200 °C under N2 to remove moisture. Thermogravimetric analysis (TGA) was performed using a TGA Q500 (TA Instruments, England) under atmosphere to determine the weight of the residue in the annealed soot. Elemental analysis of the waste soot was carried out using a Carbon Hydrogen Nitrogen Sulphur (CHNS) analyzer (Thermo Fisher Scientific, EA1112, USA) to determine the percentage composition of the elements present in it.

Electrochemical measurements. For the electrochemical evaluation of the soot, an anode slurry was prepared by mixing soot (80 wt%), carbon black (10 wt%; Super P) as a conducting agent, and CMC/SBR (10 wt%) dissolved in distilled water as a binder. The slurry was coated onto Cu-foil substrates using a doctor blade coater and dried for 12 h at 50 °C under vacuum. CR2032-type coin cells were fabricated in a glovebox filled with Ar using Li coin chips as the counter and reference electrodes, Celgard 2400 as the separator, and 1 M LiPF6 in a carbonate-based electrolyte (EC/DEC with an FEC additive, as indicated by the CV results below).

Results and Discussion

The morphology of the soot before and after graphitization was observed by TEM. The images show that the shape of the soot is typical of carbon black. The primary particles of the agglomerated soot ranged from 70 to 100 nm in size with a relatively regular size distribution, and aggregated in different directions to form interconnected structures with chain-like morphologies. The TEM images show a significant change after heat treatment, with the soot changing to a graphite-like structure (Fig. 2). Prior to heat treatment, the raw soot showed a typical disordered amorphous structure. On the other hand, after treatment at 2700 °C, the layered packets were found to be parallel to the concentric direction, and a stiff, flat, lamellar plane around an irregular or hollow core with a diameter of ~20 nm was apparent throughout the sample. This indicates that the layers grew significantly with increasing heat treatment temperature (HTT) and changed to an almost perfectly crystalline graphite structure. This type of carbon is known as carbon nano-onions (CNOs); however, the diameters seen here are much larger than those generally observed for CNOs (20-30 nm). The TEM images also show that the size of the carbon particles is reduced after the heat treatment. The waste soot contains a very large amount of hydrogen and sulfur before heat treatment (Table 4). These elements make the carbon structure highly disordered. However, after the heat treatment at 2700 °C, these elements were no longer detected. For this reason, the structure of the carbon changes from a turbostratic structure to nearly perfect graphite, and the d-spacing of the graphite is also greatly reduced. Therefore, it is presumed that the degassing of hydrogen and sulphur, together with the reduction of the d-spacing, leads to a reduction in the size of the carbon particles. XRD profiles were measured in order to confirm the change in crystallinity.
The obtained XRD profiles (Fig. 3) were corrected for the background baseline and instrumental broadening to ensure accurate microstructural characterization. The average interlayer spacing was calculated from the corrected position of the (002) peak using Bragg's equation. The interlayer spacing changed from 0.350 to 0.338 nm after heat treatment, indicating that the turbostratic (fully disordered) soot structures converted into ordered structures. However, the interlayer spacing was still slightly larger than the theoretical value for crystalline graphite (0.3354 nm), indicating that the soot was not perfectly graphitized by the heat treatment. The layer stacking height (Lc) was calculated from the (002) peaks using Scherrer's formula [27]. The Lc increased from 9.57 to 16.64 nm after heat treatment, with the 2θ value corresponding to the (002) peak shifting from 25.38° to 26.25° and the peak becoming narrower. This shows that the height of the stacked graphite layers increased as the so-called aromatic sheets were built up and ordered. Furthermore, the basal plane length (La) was obtained from the (100) peaks using Scherrer's formula [27]. The La increased from 19.71 to 35.1 nm after heat treatment, indicating noticeable ordering of the aromatic nanoclusters in the parallel direction. The observed increase in height (Lc) and the large increase in lateral crystallite dimension (La) indicate that the graphitized structure extended further in the direction of the plane than it was stacked in the direction perpendicular to the plane. The XRD profile also shows that the soot contained various impurities before the heat treatment; most of these impurities disappeared after the heat treatment and only the carbon peaks remained. However, some small peaks appeared after heat treatment, presumably corresponding to Ni oxide, which is used as a desulfurization catalyst in bunker fuel oil. The content of NiO was analysed by TGA, and it was found that the annealed soot contained only a very small amount of NiO (Fig. 4). Raman spectroscopy was performed to study the crystalline features of the annealed soot in detail (Fig. 5). The most dominant and characteristic Raman features in graphitic materials are the so-called D band (~1350 cm-1), G band (~1582 cm-1), and 2D band (~2700 cm-1). The D band originates from the presence of disorder in sp2-hybridized carbon systems associated with graphene edges, and is therefore known as the disorder or defect mode [28]. The G band arises from the stretching of the C-C bond in graphitic materials, and is common to all sp2 carbon-containing systems [28]. Thus, the ratio of the intensity of the D band to that of the G band (ID/IG) is widely used to evaluate the crystal purity and defect concentration in graphitic materials [29]. The ID/IG ratio of the soot sharply decreased from 0.9 to 0.24 after heat treatment, and is inversely proportional to the in-plane dimension of the crystallites (La). In addition, the G band shifted toward a lower frequency (from ~1588 to ~1580 cm-1, the theoretical value for graphite) after heat treatment. These results imply a high degree of graphitization, resulting in the graphitic order of the annealed soot, further supporting the conclusions drawn from the HRTEM and XRD results. On the other hand, the 2D band frequency is strongly influenced by the number of layers in the graphite. Interactions between stacked graphene layers tend to shift this band to higher frequencies [30].
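The microstructure parameters quoted above follow directly from Bragg's and Scherrer's equations; the sketch below reproduces them from the reported peak positions. The (002) peak width and the Scherrer constant K = 0.89 are assumed for illustration, since the paper reports only the derived values.

```python
import math

wavelength = 0.1540598            # nm, Cu K-alpha

def d_bragg(two_theta_deg):
    # Bragg's law: d = lambda / (2 sin(theta))
    return wavelength / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

def scherrer(two_theta_deg, fwhm_deg, k=0.89):
    # Scherrer's formula: L = K * lambda / (beta * cos(theta)), beta in rad
    beta = math.radians(fwhm_deg)
    return k * wavelength / (beta * math.cos(math.radians(two_theta_deg / 2.0)))

print(f"raw soot d002  = {d_bragg(25.38):.4f} nm")   # ~0.350 nm, as reported
print(f"annealed d002  = {d_bragg(26.25):.4f} nm")   # ~0.339 nm, as reported
# An assumed FWHM of 0.48 deg reproduces the reported Lc of ~16.6 nm:
print(f"annealed Lc    = {scherrer(26.25, 0.48):.1f} nm")
```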
The 2D band of the annealed soot appeared at 2700 cm-1, indicating the presence of graphite. In addition, the shape of the 2D peak shows that high-quality graphite was formed, because damaged graphene (or graphene oxide) yields very broad and low-intensity 2D peaks. The nitrogen adsorption/desorption isotherms for the annealed soot are displayed in Fig. 6a and the results summarized in Table 5. A linear BET range of 0.05-0.35 was used, and the BET surface area of the raw soot was calculated as 8.2 m2 g-1. Generally, the graphite used in LIBs is micron-scaled, and thus its BET surface area is very low (less than 2 m2 g-1). However, soot takes the form of carbon black, which has nanoscale primary particles that lead to a high specific surface area. The BET surface area of the annealed soot was calculated as 13.3 m2 g-1; this increase in surface area is potentially due to the removal of hydrogen from the soot during the annealing process. High surface areas (i.e., smaller anode particles) are beneficial for quicker charging of LIBs because they allow high conduction rates. However, they can also cause low first-cycle efficiency due to consumption of Li by the initial formation of solid electrolyte interphase (SEI) layers on the carbon surface, although this drawback could be overcome by using a prelithiation process [31]. Figure 6b shows the pore size distribution (PSD) analysed from the isotherms. The PSD shows that the soot has a meso-macro hierarchical structure. This can be explained as follows. The primary soot particles are arranged into 100-300 nm agglomerates. The agglomerates are aggregated into chained aggregates, and a continuous pore network (>20 nm) is formed in the interstices. That is, as the soot primary particles agglomerate and aggregate into larger units of a few micrometers, they create an extensive porous network. A similar character of particle aggregation is observed for conductive carbon blacks such as Ketjen Black and Vulcan XC-72. This property shows that if the electrical conductivity of the soot is ensured by heat treatment, it can be fully utilized as a conductive material. This will be discussed in the electrochemical analysis part. Galvanostatic charge/discharge experiments were performed to evaluate the electrochemical performance of the annealed soot as an anode material for LIBs. Figure 7a shows the charge/discharge curves of the annealed soot over the first three cycles at rates of C/5, C/2, and 1 C. The calculated reversible capacities were 282, 273, and 261 mAh g-1, respectively, and the material exhibited excellent output characteristics as the C rate was increased. In general, the amount of energy that can be extracted from a battery decreases with increasing discharge current due to an increase in the internal impedance of the battery. However, the capacity fade was very low when the soot was used, potentially because Li ions could be inserted and removed more easily owing to the increased specific surface area and via the graphitic edges exposed to the outside through the graphitization process. Meanwhile, an irreversible capacity of about 70.9 mAh g-1 was observed during the first charge/discharge process, owing to the large specific surface area of the annealed soot. However, after the third cycle, the Coulombic efficiency was >95%; this initial irreversible capacity could be significantly decreased by performing prelithiation during the actual production process.
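The first-cycle Coulombic efficiency implied by these numbers is easy to spell out. Attributing the whole 70.9 mAh g-1 irreversible capacity to the first lithiation is our simplifying assumption.

```python
# First-cycle Coulombic efficiency from the reversible capacity at C/5 plus
# the irreversible capacity consumed largely by SEI formation.
q_rev, q_irr = 282.0, 70.9               # mAh/g, from the text
ce_first = q_rev / (q_rev + q_irr)
print(f"first-cycle Coulombic efficiency ~ {100 * ce_first:.1f} %")  # ~79.9 %
```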
Furthermore, the soot exhibited a good cycle life and reversibility after long-term cycling over 150 cycles at a rate of 1 C (Fig. 7b). After 150 cycles, the soot anode still showed a specific reversible capacity of 260 mAh g-1. In addition, the Coulombic efficiency of the waste soot electrode after the first 3 cycles remained around ~99% up to 150 cycles, suggesting highly reversible Li+ insertion/extraction kinetics [32]. Cyclic voltammetry was performed to examine the reduction and oxidation peaks in the voltage range of 0.01-3.0 V (vs. Li/Li+) at a scan rate of 0.2 mV/s using the same workstation. Figure 7c shows the first three consecutive cyclic voltammetry (CV) curves of the annealed soot anode, consisting of three distinct reduction peaks. The first peak, located at 1.1 V, could be assigned to the irreversible reduction of fluoroethylene carbonate (FEC). The second broad cathodic peak, from 0.25-1.1 V, corresponds to ethylene carbonate and diethyl carbonate (EC/DEC) decomposition and the formation of a solid electrolyte interphase (SEI) layer. Finally, a sharp peak indicating the insertion of Li ions is observed at 0.25 V or less. After the first cycle, the cathodic reduction peaks disappear, and the CV curves nearly overlap without any obvious changes in the magnitude of the peak current or potential. This indicates the good reversibility of the Li insertion and extraction reactions and the cycling stability of the annealed soot anode [33]. The anodic scan shows that most of the lithium is delithiated at voltages below 1.5 V. These results show the potential of soot as a promising candidate for producing low-cost anode materials for use in LIBs. Impedance measurements were carried out to study the resistance of the SEI film and the charge transfer resistance at different cycle numbers. In Fig. 7d, the experimental data (indicated by dots) and simulated data (indicated by lines) for the annealed soot are shown at different cycles. The observed impedance spectra consist of a semicircle at the high-frequency end. Information regarding the solution and surface film resistances can be obtained from this semicircle [34]. The depressed nature of the semicircle can be attributed to the merging of two different semicircles: one due to the surface film and the other from the charge-transfer process. The diameter of the annealed soot semicircle is clearly small, which means that the resistance is very small. It maintains the same value after the 50th cycle, suggesting that the SEI film is stable and the charge transfer resistance does not increase. This is consistent with the CV results, in which no further decomposition of the electrolyte occurred after the SEI was formed stably. The rate capability of the annealed soot is shown in Fig. 7e. An initial high capacity of 315 mAh g-1 was observed at a current density of 0.1 C after four discharge/charge cycles. The capacity of the annealed soot was measured to be 315, 297, 275, 251, 218, and 150 mAh g-1 when the current rate was consecutively set to 0.1 C, 0.2 C, 0.5 C, 1 C, 2 C, and 5 C. When the current density was reduced again to a low current, the capacity recovered completely. A capacity of 320 mAh g-1 was detected in the 40th cycle when the current rate was returned to the value of 0.1 C. This result indicates that the structure of the annealed soot is stable at various current densities.
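For reference, the rate-capability data just quoted can be tabulated as capacity retention relative to the 0.1 C value:

```python
# Capacity retention at each C-rate, relative to the 0.1 C capacity.
rates = [0.1, 0.2, 0.5, 1, 2, 5]          # C-rate
caps  = [315, 297, 275, 251, 218, 150]    # mAh/g, from the text
for r, c in zip(rates, caps):
    print(f"{r:>4} C: {c} mAh/g ({100 * c / caps[0]:.0f}% retention)")
```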
Meanwhile, in order to explore other commercial applications of waste soot, we also used waste soot as a conductive material after heat treatment at 2000 °C for 2 h. Figure 7f shows the cycling performance of cells using artificial graphite as the anode active material with either the conventional commercial conductive material (Super P) or the heat-treated waste soot as the conductive additive. The artificial graphite and Super P were purchased from MTI Corporation. The slurry recipe and coating process followed exactly the procedures introduced in the experimental methods. The performance of the two cells was not significantly different, indicating that waste soot can be fully utilized as a conductive material after heat treatment at 2000 °C. These results show the potential of soot as a promising candidate for producing low-cost anode materials and conductive materials for use in LIBs.

Conclusion

This study represents the first attempt to recycle waste soot from ships into an active material for use in LIBs, a unique idea of utilizing waste for producing renewable energy. Although soot is generated by various machines on a ship, the soot used in this study was collected from the economizer, as it generates the largest quantity; this renders it most suitable for potential mass production. The collected soot was graphitized through heat treatment at 2700 °C to enable its use as an anode active material. The morphology and structure of the obtained soot were investigated by HRTEM, which revealed that the graphitized soot formed CNOs; however, these were larger than normal nano-onions. From the XRD, Raman spectroscopy, and BET surface area results, it was confirmed that the graphitized soot was converted into highly crystalline graphite, and that its specific surface area was slightly higher than that of the materials generally used as active materials. The annealed soot, with its unique graphitic multilayer structure, had an electrochemical performance that renders it suitable as a candidate for anode materials. In addition, it has a high reversible capacity and good cycling performance, which are critical for rechargeable LIBs. In the future, it will be necessary to carry out the same analysis for other types of soot emitted from ships, and to conduct research to find further uses for waste soot.
Non-congruent Phase Transitions in Cosmic Matter and in the Laboratory

Non-congruence appears to be the most general form of phase transition in cosmic matter and in the laboratory. In terrestrial applications, non-congruence means the coexistence of phases with different chemical compositions in systems consisting of two (or more) chemical elements. This is the case for all phase transitions in high-temperature chemically reactive mixtures, which are typical for uranium-bearing compounds in many nuclear energy devices, both contemporary and prospective. As for cosmic matter, most real and hypothetical phase transitions without nuclear reactions, i.e., those in the interiors of giant planets (solar and extrasolar), in brown dwarfs and other sub-stellar objects, as well as in the outer crust of compact stars, are very plausible candidates for this type of phase transformation. Two exotic phase transitions, the gas-liquid phase transition in dense nuclear matter and the quark-hadron transition occurring in the interiors of compact stars as well as in high-energy heavy-ion collisions, are discussed as the most extreme examples of hypothetical non-congruence for phase transformations in High Energy Density Matter.

Introduction

The term non-congruent phase transition (NCPT) denotes the situation of phase coexistence of two (or more) phases with different chemical compositions. This is a rather evident definition for the case of phase transitions (PTs) in most terrestrial applications (see below) and in astrophysical applications where nuclear transformations, including β-decay, are negligible: PTs in planetary science, in the outer crust of compact stars, etc. The nuclear composition in such situations is conserved, and there is no problem with the selection of systems that fulfill the condition of an NCPT. The situation is more complicated under extreme conditions like the interiors of compact stars and remnants of supernova explosions, where nuclear transformations are close to equilibrium. The problem of NCPT relevance is even more complicated in exotic situations with equilibrium hadron decay and quark deconfinement in the interiors of strange (hybrid) stars and in the hypothetical quark-hadron (QH) phase transition in ultrarelativistic heavy-ion collisions at RHIC, LHC, FAIR, and NICA. Hence the study of non-congruent phase transitions in typical terrestrial applications could be a useful basis for understanding the relevance of this type of phase transition in exotic situations like the interiors of compact stars, supernova explosions, and the hydrodynamic expansion of a fireball formed in heavy-ion collisions.

General features of non-congruent phase transitions in chemically reactive plasmas

Phase equilibrium in chemically reactive non-ideal plasmas of two or more chemical elements differs fundamentally from the case of ordinary phase equilibrium, such as the Van der Waals PT in substances with fixed chemical compositions (stoichiometry). Phase transitions in chemically reactive mixtures, including those in high-temperature uranium-bearing compounds, are typical for many nuclear energy devices, both contemporary [23] and prospective [8, 19]. The basic feature of such two-phase systems is their non-congruency, i.e., their ability to vary the stoichiometries of the coexisting phases without violating the stoichiometry of the whole two-phase mixture.
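The mass balance behind this definition is just the lever rule; a minimal sketch follows (the O/U compositions used below are illustrative numbers, not results from the paper).

```python
# Lever rule behind non-congruence: the two coexisting phases may take
# different compositions x' and x'' as long as their weighted average
# reproduces the fixed overall composition x of the two-phase mixture.

def phase_fractions(x_total, x_prime, x_double_prime):
    """Return the fractions (w', w'') of the two phases such that
    w' * x' + w'' * x'' = x_total and w' + w'' = 1."""
    w_prime = (x_double_prime - x_total) / (x_double_prime - x_prime)
    return w_prime, 1.0 - w_prime

# e.g. overall O/U ratio fixed at 2.0 while the liquid is hypostoichiometric
# and the vapour is oxygen-enriched (illustrative values only):
w_liq, w_vap = phase_fractions(2.0, 1.95, 2.4)
print(f"liquid fraction {w_liq:.3f}, vapour fraction {w_vap:.3f}")
```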
Non-congruency significantly changes the properties of all phase transitions in such systems, namely:

(A) The significant impact of the phase transformation dynamics, i.e., the strong dependence of the phase transition parameters on the rapidity of the transition. This dependence is of primary importance in experiments with fast surface evaporation of condensed samples under powerful laser heating or electron-beam energy deposition. The strong competition between diffusion and thermal conductivity processes determines the parameters of such non-congruent evaporation.

(B) The phase transition thermodynamics becomes more complicated. The essential changes include the scale of the two-phase boundaries in extensive thermodynamic variables (say, P-ρ, etc.) and even the topology of all two-phase boundaries in the space of intensive thermodynamic variables, as well as the properties and even the nature of the singular points (the critical point first of all) and the appearance of additional end-points in an NCPT.

One of the most remarkable consequences of non-congruency is the change of the general form of the two-phase boundary in the pressure-temperature plane (see Fig. 1 below). A two-dimensional "banana-like" region appears for an NCPT instead of the well-known one-dimensional P-T saturation curve of ordinary (congruent) PTs. Another remarkable property of an NCPT is that the isothermal and isobaric crossings of the two-phase region no longer coincide: the isothermal NCPT starts and finishes at different pressures, while the isobaric NCPT starts and finishes at different temperatures [14]. This property is crucial for the interpretation of NCPT relevance in the physics of compact stars and high-energy heavy-ion collisions.

3. Conditions of joint phase, chemical, and ionization equilibrium

3.1. Equilibrium between macroscopic phases with neutral species

Maxwell conditions. Phase equilibrium conditions for two macroscopic phases are well known for the case when the coexisting phases consist of arbitrary mixtures of neutral species with equilibrium chemical reactions. In accordance with the laws of chemical thermodynamics, these conditions include the conditions of equilibrium heat and momentum exchange (equality of pressures and temperatures: P' = P'', T' = T'') and the conditions of equilibrium matter exchange. The latter conditions have two variants for systems consisting of two or more chemical elements. The first one corresponds to partial equilibrium for the exchange of matter with fixed chemical composition. This condition is equivalent to the well-known Maxwell "equal squares" (equal areas) construction for the pressure-volume dependence in the case when both coexisting phases can be described by a unique thermal equation of state (EOS) P(V, T); for example, this is so for the Van der Waals (gas-liquid) phase transition. More general is the well-known "double tangent" construction for the two free energies, F'(V, T, x) and F''(V, T, x), when the coexisting phases are described by different EOSs; for example, this is so for the crystal-fluid phase transition. In both variants the final equilibrium condition corresponds to the equality of the Gibbs free energies of the coexisting phases with fixed chemical composition:

G'(P, T, x) = G''(P, T, x).    (1)

This form of the phase equilibrium condition is often referred to as the "Maxwell condition" in the astrophysical literature (for example [21]).

Gibbs conditions.
equilibrium with respect to the exchange of each species, with varying chemical compositions of the coexisting phases (x′ ≠ x′′) but without violation of the total chemical composition of the whole two-phase system. This variant leads not to one but to several separate equalities for partial quantities, the chemical potentials of each species in the coexisting phases:

µ′_i = µ′′_i (i = 1, 2, ...). (2)

In terrestrial applications this form of phase equilibrium conditions corresponds exactly to the definition of a non-congruent phase transition. In the case of equilibrium chemical reactions in each phase, the total number of equalities for the chemical potentials is decreased to a reduced number of equalities for the chemical potentials of the basic (independent) species; for example, there are two such basic units, the oxygen and uranium chemical potentials µ_O and µ_U, in the case of an equilibrium uranium-oxygen mixture (see below). In astrophysical applications the form (2) is well known under the name "Gibbs conditions". The problem is that this form is applied there not only to neutral species but to charged ones as well (see below).
3.2. Phase equilibrium of macroscopic phases in the presence of charged species (Gibbs-Guggenheim conditions)
Phase equilibrium conditions for macroscopic phases with charged species are more complicated. There are two basic points. The first one is that electroneutrality conditions are added for both phases in (1, 2). The Maxwell conditions (1) are still valid for the Gibbs free energies, G′ and G′′, of electroneutral phases with chemical and ionization equilibrium inside. As for the Gibbs conditions (2), the point is that besides the electroneutrality restrictions two additional quantities appear in the description of the coexisting phases and, correspondingly, in the equilibrium conditions as additional independent variables. These are the average electrostatic potentials, ϕ′(r) and ϕ′′(r) [5] (see for example [11]). As a result, a remarkable feature of any Coulomb system is the existence of two versions of the chemical potential, µ_i and µ̃_i. The (ordinary) chemical potential, µ_i(n_k, T), is presumed to be a local parameter depending on the local density, temperature and composition. The new (generalized) electro-chemical potential µ̃_i is not a local parameter: it depends strongly on non-local sources of influence, such as a total charge imbalance, the surface dipole, other surface properties, etc. In a uniform Coulomb system µ̃_i is equal to the sum of µ_i(n_k, T) and the electrostatic energy q_iϕ in the average (bulk) electrostatic potential ϕ, which is presumed to be uniform as well. For each charged species in a Coulomb system the values of its ordinary chemical potentials in the coexisting phases, µ′_i and µ′′_i, need not be equal under conditions of phase equilibrium. It is the electro-chemical potential that must take the same value in the coexisting phases at phase equilibrium:

µ̃′_i = µ̃′′_i, (3)
i.e. µ′_i + q_i ϕ′ = µ′′_i + q_i ϕ′′. (4)

This form of the phase equilibrium conditions (3, 4) will be referred to below as the Gibbs-Guggenheim conditions. The equalities (3, 4), combined with the electroneutrality conditions, lead to a remarkable feature of any equilibrium Coulomb system, namely: every phase boundary in such a system is accompanied, as a rule, by a finite gap in the average electrostatic potential across the phase interface [10,11]. In contrast to the work function, this inter-phase (Galvani) potential drop Δϕ is a thermodynamic quantity, which depends only on temperature and chemical composition and does not depend on surface properties. This gap tends to zero at the critical point of the gas-liquid phase transition.
The zero-temperature limit of this drop (along the coexistence curve) can be considered as an individual thermo-electrophysical coefficient of any material. The value of the discussed potential drop can be calculated directly by numerical modeling of phase transitions in a Coulomb system when both coexisting phases are simulated explicitly [11]. It should be stressed that any phase transition in a plasma of one chemical element, for example evaporation in metals, must be forced-congruent despite the fact that one (or both) of the coexisting phases is composed of two basic units: ions and electrons (all other species being their equilibrium bound complexes). It is the electroneutrality conditions in both (macroscopic) phases that make this coexistence thermodynamically one-dimensional. On the contrary, the system becomes two-dimensional (and all its phase transitions become non-congruent) as soon as we relax the electroneutrality conditions in both phases and allow equilibrium inter-phase exchange of charged species as well. This is just the case in the so-called "mixed phase" scenarios (see below).
Mesoscopic scenarios for phase equilibrium ("mixed phase" concept)
There exists a very popular and widely accepted scenario for phase transitions which differs essentially from both scenarios described above. The basic idea, claimed in [20] and developed in [3] and many other papers (for example [4]), is that in many astrophysical situations a finely dispersed, uniform and heterogeneous mixture of charged micro-fragments of one phase in an oppositely charged "sea" of the other (a charged emulsion) may be thermodynamically more favorable (i.e., stable, not metastable as in most terrestrial analogs such as mist or foam) than the standard (Maxwell) form of forced-congruent coexistence of two electroneutral macroscopic phases. The simplest form of the mixed-phase equilibrium conditions is exactly equivalent to equations (2), as if all charged species, like the neutral ones, were equilibrated with respect to inter-phase exchange. In this simplest approximation all thermodynamic losses in such a charged emulsion, due to the Coulomb energy of charge separation and the positive contribution of surface tension, are neglected. A more sophisticated form of the discussed mesoscopic scenario for phase coexistence ("structured mixed phase", see for example [16]) takes these thermodynamic losses due to surface tension and charge separation into account. This leads to the existence of an optimal size, shape and charge for the micro-fragments of both mixed phases in the above-mentioned charged emulsion ("pasta plasma"). The degree of equivalence between the "structured mixed phase" and a non-congruent PT remains an open question (see the discussion below).
Non-congruent evaporation in the uranium-oxygen system
The development of a wide-range equation of state (EOS) for uranium and uranium-bearing compounds, taking into account all phase transformations in such systems, has been the subject of long-term theoretical study [19,23]. The physics of phase transitions in uranium dioxide (UO2±x) is of primary importance for predicting the behavior of nuclear reactors during hypothetical severe accidents [23]. In a series of works [9,14,15] an adequate theoretical model of non-congruent evaporation in the U-O system was developed.
The basic point of the model is the description of both coexisting fluid phases (liquid and vapor) in a uniform manner, as equilibrium multicomponent, strongly interacting (non-ideal) mixtures of atoms, molecules, molecular and atomic ions, and electrons ("chemical picture", see e.g. [19]). Chemical reactions and ionization, as well as the parameters of phase equilibrium, have been calculated self-consistently by taking into account all significant non-ideality corrections (strong Coulomb interaction, intensive short-range repulsion and attraction) within a modified version of the thermodynamic theory. Details of the adopted approximations are described elsewhere [14,23]. The fluid model (common for the liquid and vapor phases) has been applied to self-consistent calculations of non-congruent phase coexistence within a wide range of temperatures and pressures (T ≤ 20 kK, P ≤ 2 GPa), including the vicinity of the true critical point of the non-congruent PT. The basic point of these calculations is that the Gibbs conditions (2) have been used for all neutral species in both phases, while the Gibbs-Guggenheim conditions (3, 4) have been used for all charged species, conditions which were actually violated in all previous studies of evaporation in the U-O system (for example [1]). The pressure-temperature phase diagram for non-congruent evaporation is shown in Fig. 1 as the most important result for the present discussion.
General nature of non-congruent phase coexistence in compounds and chemical mixtures
The above-mentioned long-term study of non-congruent phase equilibrium in the U-O system [14,15] indicates that this type of phase transformation is not as infrequent at high temperatures as previously believed. The main conclusion drawn from the above results can be formulated in the following statement: any phase transition in an equilibrium system containing two or more chemical elements must, in general, be non-congruent. Congruent phase transitions in such systems arise only as exceptions. This statement seems to be in evident contradiction with our everyday experience, because there are many well-known examples of PTs in compounds of two (or more) chemical elements, for example in ordinary water and other substances (H2O, CO2, NH3, etc.), whose PT parameters have been studied exhaustively and for which nobody has ever heard of non-congruence or banana-like P-T diagrams. Nevertheless, there is no contradiction: the gas-liquid PTs in all these compounds are exceptions. All these PTs are indeed congruent under room conditions because all of them conserve the mono-molecular composition through evaporation (H2O ⇌ H2O), so that the two-phase system has no degree of freedom to change the stoichiometry of the liquid and/or vapor phases. The situation is entirely different for PTs in these compounds under planetary conditions (T ~ 10-20 kK, P ~ 1-10 Mbar). The expected nomenclature of PTs under such conditions is very rich (see for example [18]), while none of the discussed compounds remains mono-molecular. Our present knowledge of the parameters of these PTs is very poor [6]. Qualitatively, however, the main statement of the present work is that any phase transition in these compounds under planetary conditions must be non-congruent, i.e., all P-T (or µ-T) boundaries for phase transitions must be two-dimensional regions instead of ordinary one-dimensional curves [15].
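The "starts and finishes at different pressures" property can be made concrete with a deliberately simple numerical toy. The following Python sketch uses an ideal two-component solution obeying Raoult's law (a textbook model with arbitrary illustrative saturation pressures, not the non-ideal U-O chemistry discussed above) to show that an isothermal crossing of the two-phase region of a binary mixture with fixed overall composition spans a finite pressure interval:

# Toy illustration of non-congruence: isothermal evaporation of an ideal
# binary A-B mixture (Raoult's law) begins and ends at different pressures.
P_A_SAT, P_B_SAT = 2.0, 0.5  # pure-component saturation pressures (arbitrary units)

def bubble_pressure(x_a):
    """Pressure at which the first vapor bubble appears for liquid composition x_a."""
    return x_a * P_A_SAT + (1.0 - x_a) * P_B_SAT

def dew_pressure(y_a):
    """Pressure at which the last liquid drop disappears for vapor composition y_a."""
    return 1.0 / (y_a / P_A_SAT + (1.0 - y_a) / P_B_SAT)

z = 0.5                      # fixed overall (conserved) composition
print(bubble_pressure(z))    # 1.25 -- evaporation starts here
print(dew_pressure(z))       # 0.80 -- evaporation finishes here

Since the two pressures differ (1.25 versus 0.80 in the chosen units), the two-phase boundary occupies a two-dimensional region in the P-T plane; a congruent transition (x′ = x′′ = z) would collapse this interval to a single saturation pressure.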
Generally, the expected examples of non-congruent phase transitions in terrestrial applications are, inter alia:
• Uranium- and plutonium-bearing compounds (PuO2±x, UC, UN, etc.),
• Evaporation in other oxides (for example, in SiO2),
• Evaporation in hydrides of metals (for example, in LiH),
• Evaporation in ionic liquids and molten salts (for example, in NaCl),
• Evaporation in metallic alloys,
• Phase transitions in "dusty" and colloid plasmas (Coulomb systems of macro- and micro-ions with charges q_M = +Z, q_m = ±1).
6. Non-Congruence in cosmic matter
Ordinary situations
There exist many candidates for this type of phase transition in cosmic matter without nuclear transformations:
• Hypothetical plasma- and dissociation-driven phase transitions in the mixture H2 + He (+ H2O + NH3 + CH4) in the interiors of giant planets (Jupiter, Saturn, Neptune, etc.), in brown dwarfs and in extrasolar planets [15],
• Phase transitions in the isentropically released products of strong shock compression of lunar ground (SiO2 + FeO + Al2O3 + CaO + ...) after a huge natural (meteorite) or artificial (LCROSS mission) impact,
• Crystallization and ionic demixing in the interiors of white dwarfs,
• Crystallization and ionic demixing in the outer envelopes of compact stars (for example [7]).
Non-congruence in exotic situations
The relevance of the non-congruent scenario for phase transitions in exotic situations is not transparent. There exist many phase transitions which could be considered as candidates for such transformations. Two groups of them are commented on here first:
(I) The gas-liquid (Van der Waals-like) phase transition in dense nuclear matter, an equilibrium mixture of p, n, e and nuclei {N(A, Z)}. Here {N(A, Z)} is the equilibrium ensemble of all possible bound complexes of Z protons and (A − Z) neutrons (see [22] and references therein). Several variants may be considered: with and without electrons, electroneutrality and Coulomb interaction, and with and without β-equilibrium.
(II) Hypothetical phase transition(s) in the vicinity of the quark deconfinement boundary at high temperature, with a very complicated nomenclature of hypothetical phase transformations at relatively low temperature (for example [4]).
Gas-liquid phase transition in nuclear matter
(I.a) The GLPT in the mixture {p, n, N(A, Z)} without electrons and without Coulomb interaction (see for example [22,2]). The system is equivalent to a chemically reacting mixture of two chemical elements. The symmetry parameter Y is an independent variable; it is equivalent to the stoichiometry (chemical composition) in ordinary chemical mixtures. Hence, this GLPT is non-congruent in the non-symmetric case (Y ≠ 0.5) and congruent (i.e., azeotropic) in the symmetric case (Y = 0.5).
(I.b) The GLPT in the electroneutral mixture {p, n, e, N(A, Z)} as the coexistence of two macroscopic phases, each of them locally electroneutral (Maxwell condition (1) combined with the Gibbs-Guggenheim conditions (3, 4)). As stressed above, the electroneutrality conditions in both macroscopic phases make this coexistence thermodynamically one-dimensional, i.e., forced-congruent.
(I.c) The GLPT in the same mixture as in (I.b), {p, n, e, N(A, Z)}, within the simplest mesoscopic scenario (simple "mixed phase"): no local electroneutrality, only a global one (Gibbs conditions (2)). This system is again equivalent to a two-component (two-element) chemically reacting terrestrial mixture. Hence, this GLPT is non-congruent in general: the P-T and µ-T phase boundaries must be two-dimensional banana-like regions instead of an ordinary (VdW-like) saturation curve.
(I.d) The GLPT in the same mixture as in (I.c) within advanced mesoscopic scenarios ("structured mixed phase", "pasta plasmas"). This is the most complicated situation: the system is not equivalent to any terrestrial analog, and the problem of congruence for such a GLPT should be analyzed separately.
Quark-hadron phase transition
(II.a) Quark-hadron (QH) phase equilibrium (PT) between macroscopic quark-gluon and hadron phases is a thermodynamically one-dimensional system.
The phase transition must obey the Gibbs-Guggenheim conditions (3, 4). Hence this variant of the QHPT is equivalent to a congruent PT, i.e., the P-T and µ-T phase boundaries must be one-dimensional curves rather than two-dimensional stripes. It should be stressed that this variant of the QHPT is not equivalent to a VdW-like PT (like case I.b) for two reasons. First, this variant of the QHPT is much closer to an entropic type of PT (i.e., a decreasing P-T coexistence curve, a small density gap, etc.) than to an enthalpic one like the VdW PT (i.e., an increasing P-T coexistence curve, a large density gap, etc.) [12,13]. Second, the presently considered versions of the QHPT are described by separate analytic EOSs for the quark and hadron phases. Hence, there is no reason to expect the appearance of a critical point in such descriptions, just as in the case of the crystal-fluid phase transition in terrestrial physics [17].
(II.b) The QHPT in the same combination as in (II.a) within the simplest mesoscopic scenario (simple "mixed phase"): no local electroneutrality, only a global one. The Gibbs conditions (2) are valid for all species, charged and neutral. The quark-hadron phase transition via the "mixed-phase" scenario has the main features of non-congruent phase transitions: an isothermal transition through the two-phase region starts and finishes at different pressures (and at different partial chemical potentials). This system is equivalent to a thermodynamically two-dimensional system. Hence, this version of the QHPT is non-congruent in general: the P-T and µ-T phase boundaries are two-dimensional stripes rather than one-dimensional curves.
(II.c) The QHPT in the same combination as in (II.a) within the advanced mesoscopic scenario: "structured mixed phase" ("pasta plasma", for example [16]). This is the most complicated situation: the system is not equivalent to any terrestrial analog, and the problem of congruence or non-congruence for such a QHPT should be analyzed separately.
Acknowledgements
The work was supported by the grants INTAS-93-66, CRDF MO-0110 and ISTC 3755, by the RAS Scientific Program "Physics of matter under extreme conditions", and by the MIPT Education Center "Physics of high energy density matter". We especially acknowledge David Blaschke for his great support and very useful discussions.
2010-05-23T09:56:32.000Z
2010-05-23T00:00:00.000
{ "year": 2010, "sha1": "798a70df8439d7d1220111bbdefcf85da98f02f7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "798a70df8439d7d1220111bbdefcf85da98f02f7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
240005511
pes2o/s2orc
v3-fos-license
The Choice Point Model of Acceptance and Commitment Therapy With Inpatient Substance Use and Co-occurring Populations: A Pilot Study
Objectives: Acceptance and Commitment Therapy (ACT) is an empirically supported treatment which aims to enhance self-acceptance and a commitment to core values. The present study examined the effectiveness of the Choice Point model of ACT in a residential substance use disorder (SUD) setting. Choice Point is a contemporary approach to ACT and targets transdiagnostic processes. Methods: This uncontrolled quasi-experimental design assessed 47 participants taking part in Choice Point for Substances (CHOPS) in order to investigate its influence on psychological inflexibility, values-based action, and self-compassion over time. The study additionally assessed for sleeper effects and associations between transdiagnostic processes and warning signs of relapse. Results: Findings demonstrated a decrease in psychological inflexibility and increases in values-based action and self-compassion over time. Gains were maintained at follow-up, and sleeper effects were observed for psychological inflexibility and mindfulness. Correlational analysis suggested that all transdiagnostic processes were related to warning signs of relapse at follow-up. Conclusion: These results provide preliminary evidence for the feasibility, acceptability, and effectiveness of CHOPS for SUD. Observed sleeper effects in psychological inflexibility and mindfulness indicate that CHOPS may provide longer-term benefits critical to a population where relapse is common. While encouraging, these findings should be interpreted with caution. Future research should utilize comparison groups when investigating CHOPS.
INTRODUCTION
According to the Substance Abuse and Mental Health Services Administration (SAMHSA), substance use disorder (SUD) is a nationwide epidemic, with approximately 21.5 million adolescents and adults meeting diagnostic criteria (Center for Behavioral Health Statistics and Quality, 2015). The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) defines SUD as the act of continuing to engage in substance use despite negative effects on cognitive, behavioral, and physiological functioning (American Psychiatric Association, 2013). SUD has been shown to impact social, personal, and occupational wellbeing while also contributing to disease, elevated crime rates, and loss of productivity (American Psychiatric Association, 2013; Center for Behavioral Health Statistics and Quality, 2015). A multitude of factors contribute to the etiology of SUD, including psychological, social learning, and social-situational influences and biological predisposition (Witkiewitz et al., 2014; Smith, 2021). These factors are both internal (i.e., personality and affective experience) and external (i.e., familial and peer interaction) in nature and impact the course of SUD (Witkiewitz et al., 2014). Nearly 8 million Americans are also identified as having one or more co-occurring mental health disorders, further contributing to relapse (Center for Behavioral Health Statistics and Quality, 2015; Ii et al., 2019). This suggests that one-third of individuals with SUD also present with comorbidities, such as depression, anxiety, stress, and posttraumatic stress disorder (PTSD; Hermann et al., 2016; Svanberg et al., 2017). Innovative approaches capable of concurrently targeting multiple diagnoses across varied life domains are needed (Roos et al., 2017).
Traditionally, SUD is treated with evidence-based practices, such as cognitive behavioral therapy (CBT), motivational interviewing (MI), and contingency management (CM; Lee et al., 2015). While established protocols have been shown to be efficacious for SUD treatment, 30 to 50% of individuals remain abstinent for only short periods of time (Lee et al., 2015; Ii et al., 2019). Due to limitations in treating chronic and comorbid presentations, established protocols may not be best suited for long-term SUD treatment (Clarke et al., 2014; Lee et al., 2015). Transdiagnostic approaches which target processes existing across disorders are warranted (Ii et al., 2019).
BACKGROUND
Acceptance and Commitment Therapy (ACT) is a transdiagnostic treatment which has gained increasing interest over the past few years. Unlike prevailing mechanistic approaches, ACT has its foundations grounded in functional contextualism (Hayes, 2004; Hayes et al., 2012). Contextual approaches, or process-based therapies, examine the way in which differing contexts affect the function of behavior (Harris, 2019; Hofmann and Hayes, 2019). While a primary aim of traditional CBT is to alter the content of thought, ACT works to modify one's relationship with private events through acceptance and change processes (Zhang et al., 2018; Hofmann and Hayes, 2019). ACT's primary goal is to increase psychological flexibility, or the act of being present with aversive stimuli while remaining committed to actions consistent with core values (Dindo et al., 2017). This transdiagnostic process is strengthened using six core processes: mindfulness, acceptance, self-as-context, cognitive defusion, committed action, and values (Zhang et al., 2018). Together, all six processes work in concert to increase psychological flexibility, which studies indicate may be more effective than symptom reduction approaches (Stotts and Northrup, 2015; Dindo et al., 2017). ACT theory purports that psychological inflexibility, or responding narrowly to internal states, helps develop and maintain substance use and mental health disorders (Levin et al., 2014). Pervasive patterns of emotional and cognitive avoidance, a primary contributor to psychological inflexibility, restrict values-consistent choices and paradoxically increase unwanted private events (Levin et al., 2014). This experiential avoidance is also transdiagnostic, resulting in avoidance of cravings and post-acute withdrawal symptoms which potentially further drug use, relapse, and a neglecting of values (Levin et al., 2014; Stotts and Northrup, 2015). By directly targeting experiential avoidance and psychological inflexibility, ACT aims to alter maladaptive escape strategies, promote experiential acceptance, and create greater flexibility in decision making (Dindo et al., 2017). Because psychological inflexibility underlies a variety of disorders, targeting this transdiagnostic process may be critical for creating lasting behavior change in SUD and co-occurring populations.
Empirical Support for ACT and SUD
ACT is recognized by SAMHSA and the American Psychological Association (APA) as an empirically supported treatment for SUD, depression, mixed anxiety, obsessive compulsive disorder (OCD), and chronic pain (Stotts and Northrup, 2015; Dindo et al., 2017). Since its inception, there have been over 325 randomized controlled trials using ACT (Gloster et al., 2020).
With regard to SUD, ACT has been shown to be effective for the treatment of opioid use disorder, cannabis dependency, alcohol use disorder, and nicotine dependency (Luoma et al., 2012). Studies conducted by Shorey et al. (2017) and others demonstrated the benefit of targeting transdiagnostic processes, such as experiential avoidance, for SUD. One study showed that participants who failed to respond to a traditional CM intervention exhibited higher levels of experiential avoidance; because no differences were found in the severity of negative affect, impulsivity, or cravings between responders and non-responders, experiential avoidance was presumed to be the main mediating factor. Shorey et al. (2017) found that higher experiential avoidance was significantly related to drug and alcohol cravings in an SUD residential setting. A number of meta-analyses have also compared ACT to traditional CBT for SUD. Ruiz (2012) found that ACT outperformed all cognitively focused CBT interventions and was potentially more effective at treating co-occurring depression, anxiety, eating disorders, and emotional disorders. A second meta-analysis comparing ACT to alternative treatments for substance use found that ACT was at least as effective as traditional CBT, nicotine replacement therapy, and 12-step approaches; however, it was better able to maintain and improve upon abstinence compared to each intervention (Lee et al., 2015). A recent review of ACT meta-analyses indicated that effect sizes favored ACT for SUD over all other control groups (Gloster et al., 2020). There is a paucity of research examining transdiagnostic processes and ACT for co-occurring disorders. Meyer et al. (2018) found that an ACT-based intervention significantly reduced comorbid PTSD and alcohol symptoms, and these reductions were maintained at follow-up. Additional decreases in functional disability, depressive symptoms, and suicidal ideation were also maintained, while symptom changes were associated with reductions in psychological inflexibility and experiential avoidance (Meyer et al., 2018). Similarly, Levin et al. (2014) found that psychological inflexibility was more strongly related to SUD with depression and anxiety than to SUD alone, highlighting the significance of targeting psychological inflexibility in co-occurring disorders (Levin et al., 2014). Additional pilot studies include an investigation by Thekiso et al. (2015), who compared ACT with treatment as usual (TAU) for alcohol use disorder and comorbid affective disorders in a hospital setting. The ACT condition demonstrated significant improvements compared to TAU, including increased abstinence from alcohol, fewer depression and anxiety symptoms, and reduced cravings at follow-up. Another study by Heffner et al. (2015) examined the effectiveness of an ACT smoking cessation group for nicotine dependency and co-occurring bipolar disorder. Researchers found that a 50% increase in acceptance was associated with a 51% increase in abstinence and that at least half of participants demonstrated a 50-60% reduction in frequency of smoking. Where therapy outcomes typically deteriorate with time, the opposite has been observed in several ACT studies. This unique ability to maintain outcomes at follow-up while continuing to exhibit therapeutic benefits has been labeled the sleeper effect (Lee et al., 2015). Luoma et al.
(2012) demonstrated such a sleeper effect when comparing ACT with traditional CBT for the treatment of shame in a residential SUD setting. Continuous treatment gains were observed in the areas of shame, substance use, and treatment adherence across the study and at follow-up (Luoma et al., 2012). In a population where relapse is common, interventions capable of building upon therapeutic gains are needed. Additional investigations into the relationship between transdiagnostic processes and warning signs of relapse may also prove beneficial, as warning signs are a significant predictor of future substance use (Miller and Harris, 2000).
ACT and Self-Compassion
It is intuitive that self-compassion be applied to SUD, as substances are often used to avoid shame and self-criticism, while self-compassion targets the biological threat system which gives rise to both (Gilbert, 2014; Luoma et al., 2019). Self-compassion was described by Neff and Tirch (2013) as a combination of self-kindness, common humanity, and mindfulness. Research investigating self-compassion for SUD is in its early stages. One investigation demonstrated the way in which patients at a residential SUD facility increased self-compassion while reducing guilt and shame after a 4-week self-compassion intervention (Held et al., 2018). Phelps et al. (2018) found that lower self-compassion was associated with a higher risk of SUD, while Platt et al. (2018) indicated that self-compassion interventions produced faster reductions in daily cigarette smoking. As a treatment approach aiming to foster self-acceptance, perspective taking, and mindfulness, ACT may be particularly well suited for developing self-compassion (Yadavaia et al., 2014). First, by building an awareness of the observer self, a self which mindfully observes the occurrence of private events, contextual changes make it possible to relate to the self in a kinder, more compassionate way (Yadavaia et al., 2014). According to Relational Frame Theory (RFT), a science of language and cognition underlying ACT, deictic relational framing allows for this perceptual shift to occur (Neff and Tirch, 2013; Yadavaia et al., 2014). Deictic framing can be defined as a way in which human language helps foster a sense of self and perspective taking through derived relationships with others. Through deictic framing, relationships are derived between I/you, here/there, and now/then. When applied interpersonally, deictic framing creates the context in which common humanity functions. When applied internally, deictic framing allows for intrapersonal shifts in context, which are necessary for responding to private events with compassion. Second, ACT promotes acceptance of internal states while committing to values-consistent decision making. This psychologically flexible state is inherently self-compassionate, as it encourages mindfulness, self-kindness, and movement toward universal values (Neff and Tirch, 2013; Yadavaia et al., 2014; Ong et al., 2019). While limited in scope, research has shown ACT to be an efficacious intervention for enhancing self-compassion. Yadavaia et al. (2014) found that after only three workshops, ACT significantly increased self-compassion at post-treatment and follow-up. Effect sizes were comparable to traditional self-compassion protocols, but the ACT intervention was shorter in duration (Yadavaia et al., 2014).
The Choice Point Model of ACT
The primary aim of Choice Point is to minimize narrow or inflexible behavior by increasing values-consistent choices (Harris, 2017). This is accomplished through building an awareness of choice points, or moments in time when a person is faced with making life choices that are values-consistent or values-inconsistent. Through increased awareness of choice points, individuals are better able to lessen reactivity to internal states, allowing for enhanced flexibility and committed action (Harris, 2017). Identifying choice points may also have the added benefit of strengthening resilience (Gervis and Goldman, 2020). Where traditional ACT utilizes six core processes to increase psychological flexibility and reduce experiential avoidance, Choice Point aims to simplify this approach and create a user-friendly experience (Ciarrochi et al., 2013). Using middle-level terms, such as toward moves, away moves, hooks, values, and choice points, the Choice Point model provides a conceptual overview which is easy for patients to consume (Harris, 2019). The model also departs from traditional ACT in several ways. First, hooks are defined to include cues under both appetitive and aversive sources of behavioral control (Harris, 2017). This allows the model to target a broader range of inflexible behaviors regardless of appetitive or aversive control (Harris, 2017). Second, building awareness of choice points in order to increase values-consistent behaviors is unique to the Choice Point approach (Ciarrochi et al., 2013). Third, the Choice Point model overtly identifies self-compassion as a value, which is not typical of traditional ACT (Ciarrochi et al., 2013).
Choice Point Applied to Substance Use Disorder
SUD has a multitude of factors contributing to its complexity, including psychological, social learning, and biological predispositions (Witkiewitz et al., 2014). Within the context of these proximal and distal factors, traditional ACT and the Choice Point model both help individuals move toward values at times when committed action is difficult. However, Choice Point ACT may be particularly well suited for SUD because of the way in which it targets both external and internal factors contributing to the development and reinforcement of the disorder. Individuals often come under appetitive control when reproducing rewarding stimuli, while falling under aversive control when avoiding unpleasant stimuli (Wilson, 2009). By increasing opportunities for values-consistent choices, choice point awareness may disrupt external reinforcement, such as social learning or maladaptive pleasure-seeking behaviors, and instead enhance values-driven appetitive control. Additionally, choice point awareness may disrupt experiential avoidance of drug cravings, emotional pain, and other internal private events specific to the individual. Utilizing choice points to alter both external reinforcement and internal avoidance patterns allows for broader, more flexible behavioral repertoires for those with SUD (Harris, 2017). This is perhaps the most significant benefit of applying Choice Point as an alternative to standard ACT for SUD.
Aim of the Present Study
This pilot study aims to determine the effectiveness of Choice Point for Substances (CHOPS) at influencing transdiagnostic processes in an inpatient SUD setting. CHOPS is a manualized approach to the Choice Point model of ACT specifically for SUD (Berman, 2017, Unpublished Manual). Like the Choice Point model, CHOPS simplifies ACT middle-level terms, builds awareness of choice points, and directly targets self-compassion as a value.
Therapeutic activities, group format, session length, and session frequency were all tailored for use in an inpatient setting. It was hypothesized that 16 sessions of CHOPS would impact transdiagnostic processes in three ways: (a) psychological inflexibility would decrease over time, while values-based action and self-compassion would increase over time; (b) sleeper effects would be observed for psychological inflexibility, values-based action, and self-compassion; and (c) psychological inflexibility would be positively associated with warning signs of relapse, while values-based action and self-compassion would be negatively related to relapse signs at follow-up. To our knowledge, this is the first investigation into the effectiveness of a manualized approach to the Choice Point model in a residential SUD setting.
Participants
Recruitment occurred at a 30-day residential SUD treatment facility located in Pennsylvania. A total of 115 participants initially signed consent to take part in the study. Due to high attrition rates resulting mainly from insurance denials, employment obligations, and family needs, 59 participants (51%) left the facility prior to treatment completion. High attrition is common in SUD settings, and rates were comparable with those found in previous studies (Roseborough et al., 2015; Dindo et al., 2017; Svanberg et al., 2017). Seven additional participants (6%) withdrew from the study, citing a desire to participate in TAU instead of the study group. Two participants (1.7%) left the facility against medical advice (AMA); one of these participants left prior to intervention involvement, while the other left shortly after participation began. These data suggest that while attrition was generally high, only 6% of participants deliberately withdrew from the study. Additionally, only about 1% of active participants left treatment AMA, which was well below facility AMA rates. All participants were between 18 and 66 years of age, with the mean age falling within the 18-34 range (see Table 1). There was a larger number of male participants (63.8%) than female participants (36.2%) who completed the intervention (N = 47). Each participant met criteria for one or more SUDs. Of the 47 participants who completed the study, almost half (48.9%) self-reported alcohol as their primary used substance. Opioid use (25.5%), polysubstance use (17.0%), stimulant use (4.3%), and anxiolytic/hallucinogenic use (4.2%) were also reported as primary reasons for admission. The majority of participants (93.6%) additionally self-identified as having one or more co-occurring mental health disorders. Anxiety (12.8%), depression (6.4%), and chronic pain (2.1%) were reported as main co-occurring disorders among participants. The highest prevalence of co-occurring disorders presented as anxiety with depression (57.4%) or a combination of anxiety, depression, and chronic pain (14.9%).
Procedure
This study was approved by the Lancaster General Health Institutional Review Board (IRB). Participants took part in a 16-session manualized group intervention over the course of 4 weeks. Sessions took place four times per week and were each 1 h in length. Closed groups were conducted over a 9-month period with subsequent 3-month follow-up assessments. One primary facilitator (a Bachelor's-level clinician) implemented the manual while a secondary group leader acted as an ancillary therapist. On one occasion, a third clinician facilitated the group when both primary and secondary clinicians were unavailable.
In order to enhance study fidelity, each clinician received direct manual training with a doctoral-level psychologist specializing in ACT. Study fidelity was further strengthened using an intervention checklist in order to rate facilitators during each group session and ensure accurate manual implementation. The intervention checklist is an unstandardized checklist developed for this study in order to assess facilitators in seven main areas of focus: (1) group start time, (2) review of middle-level terms, (3) psychoeducation, (4) appropriate implementation of therapeutic activity, (5) processing of activity, (6) clinician engagement and enthusiasm, and (7) preparation and knowledge regarding the material. Checklist ratings indicated that facilitators adhered to the protocol, were competent in their implementation, and were free of therapy contamination. All eligible patients were provided the opportunity for study participation during admission. Inclusion criteria required participants to be English speaking, admitted as an inpatient resident, and 18 years of age or older. Patients were excluded if they presented with significant cognitive impairment or had previously attended Choice Point groups. New patients were recruited in 4-week intervals. Those meeting inclusion criteria were invited to attend an informed consent meeting where they were educated about the purpose of the study, as well as the risks and benefits of their participation. At the informed consent meeting, all attendees were greeted with incentives limited to food and refreshments. No additional incentives were provided. Informed consent and private health information (PHI) forms were reviewed orally, and participants provided written consent. A demographics form and three assessment measures were also completed by participants. Those unable to attend due to scheduling conflicts met with researchers individually. Confidentiality precautions were taken, including de-identifying subject names and securing documentation behind multiple locked doors. Thirty-one participants provided 3-month follow-up data, which were obtained through phone, email, Internet, and in-person collection. When corresponding through email, participants received a link for completing assessments through Survey Monkey. Those who were reached by phone were provided the option to complete assessments through telecommunication or to have hard copies mailed to their home. If the participant was unable to be reached by email or phone, a voice message was left when permitted.
CHOPS incorporated self-compassion exercises with the aim of increasing covert self-compassion and overt self-compassion. Covert self-compassion was defined as the process of self-acceptance, self-validation, and self-kindness occurring intrinsically during times of psychological flexibility. Overt self-compassion was defined as a purposeful act of valuing self-compassion while taking steps in pursuit of that value. Covert self-compassion was cultivated through the use of mindfulness interventions, acceptance exercises, and a commitment to core values. Overt self-compassion was targeted using the Choice Point diagram to identify compassion-focused values. Additionally, self-compassion exercises were utilized for the purpose of fostering shifts in perspective taking. Participants imagined speaking compassionately to a friend, followed by a shift in perspective toward the self.
Alternating between I/you and here/there perspectives aimed to strengthen the deictic framing processes necessary for self-compassion.
Manual Implementation
Session 1 was an introduction to the Choice Point model. Participants were exposed to middle-level terms, a Choice Point diagram, and an introductory video. Sessions 2 and 3 helped participants to clarify values while labeling internal and external obstacles common to SUD (referred to as hooks). Sessions 4 through 6 provided psychoeducation about mindfulness skills and how to apply those skills to cravings, affective hooks, and making wise choices. Specifically, participants took part in mindfulness of breath, mindful hook sorting tasks, and a tin can monster meditation. Sessions 7 and 8 educated participants about toward/away moves using the bus metaphor and about patterns of experiential avoidance using an avoidance loop activity. These demonstrated the manner in which habitual avoidance of cravings, unwanted affect, and cognition negatively impacts values-based decision making. Session 9 saw participants identifying choice points by categorizing values, toward moves, away moves, and hooks into Choice Point diagrams. Sessions 10 and 11 introduced cognitive defusion skills, where participants practiced mindfully unhooking from the content of thought while concurrently observing the process of thought (observer self). Session 12 aimed to help participants practically apply choice points through taking BOLD action. Participants practiced breathing slowly, observing internal and external experience, listening to their values, and deciding to make values-consistent choices. Sessions 13 through 16 helped participants identify variations in self-compassion while sorting them into Choice Point diagrams. Participants additionally engaged in experiential exercises meant to strengthen deictic framing and self-compassion through perspective taking.
Measures
Participants completed four separate assessment measures: the Acceptance and Action Questionnaire-II (AAQ-II), the Valued Living Questionnaire (VLQ), the Self-Compassion Scale (SCS), and the Advanced Warning of Relapse Questionnaire-Revised (AWARE). Because a large number of participants were expected to present with co-occurring mental health disorders, the Acceptance and Action Questionnaire-Substance Abuse (AAQ-SA) was not chosen due to its craving-specific items. The AAQ-II, VLQ, and SCS were administered before Session 1, after Session 8, after Session 16, and again at 3-month follow-up. AWARE was also administered at 3-month follow-up. Feasibility and acceptability were measured by assessing treatment adherence, therapeutic outcomes, recruitment success, and self-reported patient satisfaction.
Acceptance and Action Questionnaire-II
The AAQ-II is a 7-item self-report measure of psychological inflexibility and experiential avoidance (Miron et al., 2015). Higher scores on the 7-point Likert scale are indicative of greater psychological inflexibility and experiential avoidance (Dixon et al., 2016). The AAQ-II has good internal consistency (α = 0.84), appropriate discriminative validity, and strong test-retest reliability (Bond et al., 2011; Miron et al., 2015).
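As a concrete illustration of how a total score and the internal-consistency coefficients quoted throughout this section (Cronbach's α) are computed from raw item responses, consider the following minimal Python sketch. The function names are ours, and the scoring convention shown (a simple sum of the seven 7-point items, giving a 7-49 range) reflects standard AAQ-II usage rather than anything specific to this study:

import numpy as np

def score_aaq2(item_responses):
    """Sum the seven 7-point AAQ-II items (total range 7-49);
    higher totals indicate greater psychological inflexibility."""
    return np.asarray(item_responses, dtype=float).sum(axis=-1)

def cronbach_alpha(item_matrix):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(item_matrix, dtype=float)
    k = items.shape[1]
    sum_of_item_variances = items.var(axis=0, ddof=1).sum()
    variance_of_totals = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - sum_of_item_variances / variance_of_totals)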
Valued Living Questionnaire
The VLQ is a two-part questionnaire with 10 items in each section. Part one assesses values importance and asks individuals to rate the importance of values in 10 specific valued-life domains (Wilson et al., 2010). Part two assesses the consistency with which values-consistent actions occurred during the past week. Both assessment sections are scored on a 10-point Likert scale and are used to calculate a valued living composite score. Higher composite scores are representative of increased values-based action. The Importance and Consistency subscales showed good (α = 0.83) and adequate (α = 0.60) internal consistency, respectively. The Valued Living composite also demonstrated adequate internal consistency (α = 0.74). The VLQ is correlated with measures of depression and experiential avoidance and displays adequate internal and temporal consistency (Wilson et al., 2010; Dixon et al., 2016).
Self-Compassion Scale
The SCS is a 26-item scale which measures trait levels of self-compassion (Neff, 2015). Scored on a 5-point Likert scale, the SCS has demonstrated good overall psychometrics and creates a self-compassion total score by calculating the mean of six subscale mean scores. Subscales include self-kindness, self-judgment, common humanity, isolation, mindfulness, and over-identification (Neff, 2015; Neff et al., 2017). The SCS has demonstrated excellent internal consistency (α = 0.92), good convergent and discriminant validity, and strong predictive validity (Neff, 2003, 2015).
Advanced Warning of Relapse Questionnaire-Revised
AWARE has 28 items and is scored on a 7-point Likert scale. The self-report measure assesses warning signs of alcohol relapse, with higher scores indicative of greater relapse signs (Miller et al., 1996). AWARE has been shown to be a good predictor of relapse occurrences. It has demonstrated excellent internal consistency (α = 0.92) and good test-retest reliability (Miller and Harris, 2000). For purposes of this study, phrasing was refined to represent signs of relapse more generally rather than alcohol-specific relapse.
Design
Using a quasi-experimental design, the current study aimed to examine the effectiveness of CHOPS in an inpatient SUD setting. A priori power analysis was performed using G*Power in order to calculate sample size and power for the repeated measures ANOVA, dependent t-test, and Pearson r correlational analyses (Faul et al., 2007). Analyses indicated that sample sizes consisting of 28, 34, and 64 participants, respectively, were required for 80% power. The main analysis exhibited adequate power, while follow-up analyses including t-tests and correlational analysis were underpowered. Forty-seven participants located in an inpatient SUD facility completed the 16-session group intervention. One-way repeated measures ANOVAs were performed in order to assess change in psychological inflexibility, values-based action, and self-compassion over three points in time (pre-treatment, mid-treatment, and post-treatment). Pre-treatment data were collected prior to Session 1, mid-treatment data after Session 8, and post-treatment data after Session 16. Paired sample t-tests were also performed comparing baseline functioning with 3-month follow-up (n = 30) and post-treatment functioning with 3-month follow-up (n = 20) for transdiagnostic processes. Due to the frequency of early discharge, participants who completed a minimum of 8 sessions were included in follow-up data, resulting in varied sample sizes. Twenty-nine participants (n = 29) were additionally assessed using a bivariate correlational analysis to determine the extent to which warning signs of relapse were related to psychological inflexibility, values-based action, and self-compassion at 3-month follow-up.
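For readers who want a reproducible analogue of the a priori power analysis described above, the following Python sketch performs comparable computations with statsmodels and scipy (G*Power itself is a standalone program). All effect sizes here are illustrative assumptions, not the values used in the study:

import math
from scipy.stats import norm
from statsmodels.stats.power import TTestPower, FTestAnovaPower

# Paired (dependent) t-test: sample size for 80% power at alpha = .05,
# assuming a medium standardized difference of d = 0.5.
n_paired = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)

# One-way ANOVA with the three occasions treated as groups (statsmodels has no
# dedicated repeated-measures power routine, so this only approximates the
# G*Power repeated-measures option), assuming a medium effect of f = 0.25.
n_anova = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                        power=0.80, k_groups=3)

# Pearson r via the Fisher z approximation: sample size needed to detect r = 0.3.
z_alpha, z_beta = norm.ppf(1 - 0.05 / 2), norm.ppf(0.80)
n_corr = (z_alpha + z_beta) ** 2 / math.atanh(0.3) ** 2 + 3

print(round(n_paired), round(n_anova), round(n_corr))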
Because group attendance was the primary measure of treatment adherence, attendance records were utilized to assess treatment adherence and its relationship with psychological inflexibility, values-based action, and self-compassion post-treatment. Missing data were not present during pre-treatment, mid-treatment, or post-treatment assessments; however, incomplete follow-up data were analyzed using listwise deletion. Utilizing this analysis contributed to additional variations in sample sizes for t-tests and correlational analyses.
Main Analysis: Within-Group Comparisons
Three independent repeated measures ANOVAs were performed on psychological inflexibility, values-based action, and self-compassion (see Table 2). The Shapiro-Wilk test, in combination with P-P plot observations, was used to test for normality and indicated that all three variables were normally distributed, p > 0.05. Three significant main effects were observed, with a large effect size for each variable. A decrease in psychological inflexibility, F(2, 92) = 29.89, p < 0.001, η² = 0.39, was seen across the intervention over time. The Huynh-Feldt correction was used for the VLQ because the sphericity assumption was violated, χ²(2) = 10.81, p = 0.004, and indicated a significant increase in values-based action, F(1.70, 78.27) = 74.05, p < 0.001, η² = 0.62. A main effect in self-compassion, F(2, 92) = 28.21, p < 0.001, η² = 0.38, was also observed over time. All VLQ and SCS subscales were significant, p < 0.001 (see Table 2). The Bonferroni method was employed to compare means across levels of the intervention (see Table 2). Findings demonstrated decreases in psychological inflexibility and increases in values-based action and self-compassion when comparing pre-treatment with mid-treatment means (p < 0.001) and pre-treatment with post-treatment means (p < 0.001). Mid-treatment and post-treatment comparisons were also significant for psychological inflexibility (p = 0.007), values-based action (p < 0.001), and self-compassion (p = 0.001).
Sleeper Effect
Paired sample t-tests were also performed comparing post-treatment and follow-up to determine if any sleeper effects occurred (see Table 4). Findings indicated significant improvements in psychological inflexibility, t(19) = 3.29, p = 0.004, d = 0.74, from post-treatment to follow-up. Overall self-compassion was not found to change significantly from post-treatment to follow-up, although a significant improvement was observed for the mindfulness subscale (see Table 5).
Relapse Prevention
A bivariate correlational analysis was performed to determine the extent to which psychological inflexibility, values-based action, and self-compassion were related to warning signs of relapse at follow-up. Results showed a significant association between warning signs of relapse and transdiagnostic processes, p < 0.01. Self-compassion and psychological inflexibility exhibited the strongest associations, with self-compassion demonstrating an inverse relationship with warning signs of relapse, r(27) = −0.68, p < 0.001, and psychological inflexibility showing a positive relationship with warning signs of relapse, r(27) = 0.66, p < 0.001. Values-based action was also negatively associated with warning signs of relapse, r(27) = −0.58, p = 0.001. SCS and VLQ subscales, with the exception of the Importance subscale, were significant, p < 0.05 (see Table 6).
Treatment Adherence
Attendance was analyzed as a measure of treatment adherence, intervention feasibility, and intervention acceptability. A bivariate correlational analysis was performed in order to determine the extent to which attendance was related to psychological inflexibility, values-based action, and self-compassion post-treatment (see Table 7).
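The within-group analyses reported above can be reproduced in outline with standard Python tooling. The sketch below assumes a hypothetical long-format data file whose name and column labels are placeholders (they are not artifacts of the present study); AnovaRM additionally requires complete, balanced data, which mirrors the listwise deletion described in the Design section:

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("chops_scores_long.csv")  # columns: subject, time, aaq2 (hypothetical)

# Repeated measures ANOVA across pre-, mid-, and post-treatment scores
# (AnovaRM requires each subject to have a score at every occasion).
anova_df = df[df["time"].isin(["pre", "mid", "post"])]
print(AnovaRM(anova_df, depvar="aaq2", subject="subject", within=["time"]).fit())

# Paired t-test comparing post-treatment with 3-month follow-up (sleeper effect);
# dropping incomplete pairs mirrors listwise deletion.
wide = df.pivot(index="subject", columns="time", values="aaq2")[["post", "followup"]].dropna()
t_stat, p_value = stats.ttest_rel(wide["post"], wide["followup"])

# Pearson correlation between follow-up inflexibility and AWARE relapse signs.
fup = pd.read_csv("chops_followup.csv")    # columns: aaq2_fu, aware (hypothetical)
r, p_r = stats.pearsonr(fup["aaq2_fu"], fup["aware"])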
DISCUSSION
The present study aimed to assess the effectiveness of the CHOPS protocol at improving psychological inflexibility, values-based action, and self-compassion in a residential SUD setting. Also examined was the extent to which warning signs of relapse were associated with all three transdiagnostic processes at 3-month follow-up. Further, data were analyzed to determine the relationship between treatment adherence and psychological inflexibility, values-based action, and self-compassion post-treatment. Established treatment approaches for SUD have demonstrated limited short-term and long-term success (Ii et al., 2019). However, present findings showed significant overall improvements in psychological inflexibility, values-based action, and self-compassion, indicating that participants were more willing to experience unwanted internal events, engage in actions consistent with values, and treat themselves more compassionately post-treatment. In other words, participants were more accepting of thoughts and feelings, made choices consistent with values, and demonstrated self-kindness, mindfulness, and connectedness. Significant gains occurred across all levels of the intervention, indicating that benefits initially began within the first 2 weeks of treatment, followed by continued progression throughout the intervention. When comparing pre-treatment, mid-treatment, and post-treatment means, large effect sizes were observed for each transdiagnostic variable, with values-based action exhibiting the largest effect size. Established SUD interventions traditionally display small effect sizes which are short in duration (Lee et al., 2015). Present outcomes support findings indicating superior effect sizes for ACT models compared to established protocols and suggest that CHOPS may also be an effective alternative for SUD. It should be noted that CHOPS assessed transdiagnostic processes as opposed to abstinence rates, making comparisons between modalities difficult (Lee et al., 2015). Previous studies additionally indicate that ACT interventions are prone to incubation effects, where therapeutic benefits are maintained at follow-up. The present study builds upon these findings and suggests that CHOPS demonstrated similar therapeutic gains which were maintained at follow-up compared to both baseline and post-treatment. Effect sizes were largest when comparing baseline and follow-up means. Developing interventions capable of maintaining treatment gains is particularly important in a population where relapse is common. Recent ACT literature has developed an interest in determining whether ACT interventions also create a longer-term sleeper effect, where therapeutic gains are not only maintained at follow-up but improved upon after therapy completion (Lee et al., 2015). Consistent with Lee et al. (2015), current findings indicated a sleeper effect for psychological inflexibility, suggesting continued benefits at follow-up beyond post-treatment gains (see Table 5).
TABLE 6 | Descriptive statistics and correlations for warning signs of relapse, psychological inflexibility, values-based action, and self-compassion.
An additional sleeper effect was also observed for mindfulness, a component of self-compassion, indicating that mindful awareness continued increasing after treatment conclusion. Sleeper effects were not observed for values-based action or total self-compassion, as had originally been expected.
Together, these findings indicate that Choice Point ACT may result in more robust outcomes than established protocols. Because SUD relapse is commonplace, an investigation into the long-term benefits of targeting transdiagnostic processes is warranted. Some meta-analyses have found that ACT better maintained abstinence at follow-up when compared to established SUD protocols (Lee et al., 2015). The present study adds to these findings by assessing the relationship between transdiagnostic processes and warning signs of relapse at follow-up. Findings indicate that self-compassion and psychological inflexibility both demonstrated a strong relationship with warning signs of relapse. Participants who reported greater self-compassion indicated fewer relapse signs, while those reporting increased psychological inflexibility indicated greater relapse signs at follow-up. Values-based action and warning signs of relapse were also strongly related at follow-up, suggesting that those taking actionable steps toward values additionally exhibited fewer relapse signs. These findings add to a growing body of ACT literature and suggest that Choice Point ACT may also result in better long-term abstinence rates than established protocols for SUD. Further, increasing psychological flexibility, values-based action, and self-compassion has the potential to reduce relapse rates in the long term (Lanza et al., 2014). Attendance frequency was examined as a measure of treatment adherence. Those with SUD or chronic mental health disorders are 50% less adherent, contributing to relapses and re-hospitalizations (Herbeck et al., 2005; Gaudiano et al., 2012; Moitra and Gaudiano, 2016). Those with co-occurring presentations are at even greater risk (Herbeck et al., 2005). Present findings indicate that 85.1% of those who completed the study missed a total of 0-1 sessions, suggesting strong treatment adherence among those participants. This includes periods of detox, which are notoriously challenging times for therapy engagement. The relationship between transdiagnostic processes and treatment adherence was also investigated to determine if transdiagnostic approaches, such as CHOPS, are viable methods for targeting adherence. Psychological inflexibility demonstrated a moderate inverse relationship with treatment adherence, indicating that participants exhibiting greater psychological inflexibility were also less treatment compliant. This suggests that increasing psychological flexibility could positively impact adherence in SUD, chronic mental health, and co-occurring populations. Values-based action and self-compassion were not significantly related to treatment adherence, suggesting a lack of meaningful relationship between these constructs. This study is subject to several limitations for consideration. First, given the lack of control group and randomization, we cannot rule out the possibility of confounding variables influencing the results of the study. Participants were also exposed to a multitude of therapies outside of the study, including Dialectical Behavior Therapy (DBT), CBT, alternative therapies, and medication therapy. It is also possible that admission into a residential program was itself behaviorally activating and provided motivation for treatment adherence. Future studies should include a comparison group in order to control for extraneous variables and assess for motivational levels or stages of change.
It may also be difficult to determine whether CHOPS-specific activities in particular influenced outcomes, as traditional ACT metaphors, common Choice Point activities, and novel CHOPS interventions were all utilized. Second, because the population sample consisted of residents in an inpatient facility, threats to external validity exist. The extent to which findings can be generalized to outpatient, nonclinical, or other inpatient settings is unclear. Additionally, the population sample was 87% Caucasian, making it more difficult to generalize findings across race and ethnicity. Future studies should aim to be more inclusive and generalizable. Third, while 3-month follow-up data suggest that therapeutic gains were maintained and improved upon in some cases, 12-month follow-up is warranted. Studies supporting long-term benefits of ACT do exist; however, further analysis is needed (Lanza et al., 2014). Follow-up data collection was challenging due to changing contact information and places of residence. Correspondence conducted through phone, email, Internet, and in-person conference was minimally successful, resulting in small sample sizes and underpowered t-tests. It is possible that lack of power resulted in type-II error, negatively impacting the ability to find true sleeper effects for values-based action and self-compassion. Underpowered correlational analyses were also a consequence of small sample size, which may have erroneously contributed to nonsignificant findings when examining relationships with treatment adherence. Additionally, abstinence rates were not assessed at follow-up. Future studies should assess abstinence, especially with regard to how it relates to transdiagnostic processes. Fourth, using a repeated measures design, such as an ANOVA or a paired sample t-test, also has drawbacks. While a repeated measures design allows for greater power and smaller sample sizes, participants are also more susceptible to carryover effects (Cleophas, 1999). This study additionally relied exclusively on self-report measures. While self-report measures provide an accessible manner of data collection, they are vulnerable to response bias when respondents present themselves in socially desirable ways (McDonald, 2008; Crutzen and Göritz, 2010). It is important to mention that the AWARE Questionnaire was modified for the purpose of assessing warning signs of substance use rather than warning signs specific to alcohol use. Phrasing was minimally amended; however, it is possible that validity was compromised as a result of the revision.
CONCLUSION
CHOPS demonstrated preliminary feasibility and acceptability in the treatment of SUD. To our knowledge, this is also the first application of the Choice Point model in an inpatient facility. As expected, main effects were observed in psychological inflexibility, values-based action, and self-compassion, with significant gains occurring across all levels of the intervention. Therapeutic benefits were maintained at follow-up for all three transdiagnostic processes and improved upon in the areas of psychological inflexibility and mindfulness. Transdiagnostic variables were correlated with warning signs of relapse, while treatment adherence was associated with psychological inflexibility at follow-up. These findings have important implications for the treatment of SUD and co-occurring disorders.
TABLE 7 | Descriptive statistics and correlations for treatment adherence, psychological inflexibility, values-based action, and self-compassion.
Due to increasingly high attrition rates and limitations of managed care, interventions capable of providing early therapeutic gains are optimal. Because treatment adherence influences outcomes, targeting psychological inflexibility may provide a novel means of improving compliance. Additionally, CHOPS may be an effective alternative model for achieving longer-term abstinence, as evidenced by sustained treatment gains and sleeper effect findings. Interventions proficient at maintaining therapeutic benefits while building upon those gains are desirable in a population where relapse is likely to occur.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, and further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Lancaster General Health Institutional Review Board. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
BB and KK collaborated on study design and data collection. BB performed statistical analyses and data interpretation. KK created manuscript tables. All authors were involved in drafting, revising, and approving submission of the manuscript.
Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Copyright © 2021 Berman and Kurlancheek. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
2021-10-28T13:41:15.724Z
2021-10-28T00:00:00.000
{ "year": 2021, "sha1": "f89ce5d01b0909f739e57f32ed2903924beaa31e", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2021.758356/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f89ce5d01b0909f739e57f32ed2903924beaa31e", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
263359797
pes2o/s2orc
v3-fos-license
Markers of Bronchiolitis Obliterans Syndrome after Lung Transplant: Between Old Knowledge and Future Perspective
Bronchiolitis obliterans syndrome (BOS) is the most common form of CLAD and is characterized by airflow limitation and an obstructive spirometric pattern without high-resolution computed tomography (HRCT) evidence of parenchymal opacities. Computed tomography and microCT analysis show abundant small airway obstruction, starting from the fifth generation of airway branching and affecting up to 40–70% of airways. The pathogenesis of BOS remains unclear. It is a multifactorial syndrome that leads to pathological tissue changes and clinical manifestations. Because BOS is associated with the worst long-term survival in LTx patients, many studies are focused on the early identification of BOS. Markers may be useful for diagnosis and for understanding the molecular and immunological mechanisms involved in the onset of BOS. Diagnostic and predictive markers of BOS have also been investigated in various biological materials, such as blood, BAL, lung tissue and extracellular vesicles. The aim of this review was to evaluate the scientific literature on markers of BOS after lung transplant. We performed a systematic review to find all available data on potential prognostic and diagnostic markers of BOS.
Introduction
A lung transplant is considered the best therapeutic option for patients with end-stage lung disease. Survival after lung transplant continues to improve: median survival is now 6.9 years thanks to the efficacy of prophylaxis, new drugs, and better risk stratification [1]. However, graft failure is responsible for 22.7% of deaths in the interval between 30 days and 1 year after transplant. After the first year, chronic lung allograft dysfunction (CLAD) is the leading cause of death [2]. The International Society for Heart and Lung Transplantation (ISHLT) consensus recently re-defined CLAD and related phenotypes [3,4]. CLAD is defined as a persistent decline (≥20%) in the measured forced expiratory volume in the first second (FEV1) from the post-transplant baseline. The date of CLAD onset is defined as the date of the first FEV1 value ≤80% of baseline [4]. Bronchiolitis obliterans syndrome (BOS), restrictive allograft syndrome (RAS) and the newly defined mixed phenotype are the three major phenotypes of CLAD. BOS is the most common form of CLAD and is characterized by airflow limitation and an obstructive spirometric pattern unexplained by acute rejection, infection or other coexistent conditions.
Molecular and Immunological Mechanisms Involved in the Pathogenesis of BOS
The molecular and immunological mechanisms underlying BOS are unclear, and no disease-specific biomarkers have yet been found [25]. From the histopathological point of view, BOS is characterized by the accumulation of a submucosal extracellular matrix, muscle cell hyperplasia and complete obliteration of the airway lumen with partial destruction of the original smooth muscle layer. Chronic inflammation occurs in these patients and leads to excessive recruitment and/or activation of (myo-)fibroblasts in small peripheral airways [26]. Evidence of aberrant angiogenesis has also been reported in bronchiolitis obliterans lesions, consisting of the proliferation and enlargement of the microvasculature [14,15]. This mechanism could explain the flow limitation and dyspnoea typical of patients with BOS [27–29]. Antibodies also play a crucial role in the development of BOS.
Graft antigen antibodies are closely linked to the development of BOS in lung transplant patients because graft-reactive antibodies induce activation of the complement system and degradation of lung tissue [30]. Concerning immunity, an increase in Th1 cells or Th1-related cytokines in blood or BAL fluid of BOS patients suggests that Th1 cells play a role in the process of CLAD [16,17]. Th17 seems to be involved in the development of BOS, although the immunological mechanisms are largely unexplored. Th17 supports chronic inflammation that may favour chronic dysfunction through airway fibrosis, neutrophil chemotaxis and/or expansion of autoantibodies [31]. Moreover, an imbalance in Th17 and regulatory T cells, resulting in an increased Th17/Treg ratio, is linked to chronic dysfunction. Inflammatory factors in a BOS microenvironment can favour the differentiation of Treg into Th17 cells through the production of IL-6, modifying the Th17/Treg ratio and facilitating chronic dysfunction [32]. Concerning the role of B cells, no clear molecular mechanisms leading to chronic rejection have been established. An accumulation of B cells is observed in the lung tissue of patients with CLAD [33], and the presence of donor-specific HLA antibodies is related to BOS development [21,22,34–37]. Toll-like receptors (TLR) have a central role in the pathogenesis of BOS. These receptors can be activated by infections, ischemia time and ischemia-reperfusion injury. Colonisation by pathogens such as Aspergillus fumigatus and Pseudomonas aeruginosa can also stimulate TLR and chemokine production (CXCL1 and CXCL5) [23,24]. The selected articles are reported in Table 1.
Tissue Markers
The study of tissue from lung transplant patients can be useful for investigating the expression of genes, proteins and molecules specific to BOS lesions. Although biopsy is an invasive procedure, tissue specimens can provide information about molecular and immunological aspects.
Liver Kinase B1 Gene
Liver kinase B1 (LKB1) is a serine/threonine kinase (STK11) implicated in tumour suppression and in the regulation of cell metabolism [69]. A major function of LKB1 is the activation of 5′ AMP-activated protein kinase (AMPK), a regulator of metabolism and cell growth, which plays an inhibitory role in the epithelial-mesenchymal transition, tissue fibrosis and malignant transformation [70]. LKB1 was originally identified as the product of a loss-of-function mutation in different genetic syndromes and malignancies [71].
In non-small cell lung cancer, loss of LKB1 is associated with a more aggressive phenotype of this tumour [72,73]. Low expression of LKB1 is associated with a higher risk of organ rejection and with the occurrence and progression of human chronic graft-versus-host disease after bone marrow transplant [74]. A recent study [38] compared LKB1 activities in biopsies from newly diagnosed BOS and stable lung transplant patients, demonstrating significant downregulation of the LKB1 gene in BOS [38]. In line with the literature, the results of the study suggest that the downregulation of LKB1 may lead to tissue fibrosis, facilitating the development of BOS, and that this protein could, therefore, be a reliable biomarker of risk of BOS.
Checkpoint Molecules
Different cell subsets, including dendritic cells, macrophages and T cells, as well as anti-HLA antibodies and interleukins, are implicated in chronic rejection and are involved in the expression and regulation of immune checkpoints [75–77]. The role of immune checkpoints in the development of BOS has been partially investigated. Programmed death 1 (PD-1) and cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) are proteins expressed on the surface of T and B cells that contribute to the down-regulation of the immune system while promoting self-tolerance by suppressing T cell inflammatory activity [78,79]. The ligand of PD-1, named PD-L1, binds PD-1, suppressing the effector function of T cells [80]. The role of immune checkpoints has been studied widely as a targeted therapy in cancer [81]. The possible role of the expression and function of immune checkpoint molecules in chronic allograft dysfunction is not clear. Sporadic experiences with immune checkpoint inhibitor treatments of kidney and heart transplant patients with cancer have shown that administration of these molecules results in the rapid development of severe rejection [82]. Righi et al. [46] used immunohistochemistry to detect significantly lower PD-1, PD-L1 and CTLA-4 expression in BOS patients than in those with RAS. The exhausted phenotype of PD-1 cells, expressed in T cells and regulatory T cells (Tregs), proved to be significantly lower in RAS than in BOS patients. The triggers that chronically stimulate Tregs can determine their loss of function. This may contribute to the uncontrolled immune response that characterizes BOS.
VEGF/VEGFR2
Vascular endothelial growth factor (VEGF) is a potent factor essential for angiogenesis. VEGF is active in physiological functions such as bone formation, haematopoiesis and wound healing [83]. It is produced by different cell lines, including macrophages, keratinocytes and fibroblasts. Although VEGF is essential for physiological homeostasis in various cell lines and tissues, it is also important in the pathogenesis of tumour growth and metastasis since it mediates vascular permeability and neo-angiogenesis [84–87]. VEGF and VEGF receptor 2 (VEGFR2) have been implicated in pulmonary vascular remodelling and in chronic rejection [88,89]. In this regard, a recent study [63] analysed concentrations of VEGF produced by distal-derived lung fibroblasts, before and after stimulation with TGF-β, in LTx patients with BOS and in stable LTx recipients and healthy controls, at baseline and follow-up (3, 6 and 12 months).
It emerged that VEGF synthesis from distal-derived lung fibroblasts was significantly lower 3 months after lung transplant than in non-transplanted subjects and was increased at 6 months and 12 months after transplant, achieving the same level as in the healthy controls. These findings demonstrate the processes of remodelling of pulmonary and bronchial vessels after lung transplant. Stimulation with TGF-β significantly enhanced the synthesis of VEGF in fibroblasts from lung transplant patients and healthy non-transplanted subjects. After TGF-β1 stimulation, levels of VEGF in fibroblasts obtained from patients who developed BOS 12 months after LTx were significantly lower than in patients without chronic rejection. The number of cells expressing markers of collagen production, VEGFR2/p4OH, was higher in patients with BOS, suggesting that BOS is related to chronic inflammation and subsequent fibrosis.
CXCL9 and CXCL10
C-X-C motif chemokine ligands 9 and 10 (CXCL9 and CXCL10) are chemokines stimulated by IFN-gamma that induce chemotaxis and promote differentiation and multiplication of leukocytes. They have also been associated with a wide spectrum of inflammatory lung diseases through pro- and antifibrotic effects [90]. Common mechanisms of rejection have been reported in different grafted organs, leading researchers to combine data from multiple organs and formulate a common rejection module consisting of 11 genes (CD6, TAP1, CXCL10, CXCL9, INPP5D, ISG20, LCK, NKG7, PSMB9, RUNX3 and BASP1) overexpressed during allograft rejection [91]. In recently published data, chronic rejection has been investigated by comparing the expression of these genes in lung tissue of BOS and RAS patients [66], showing lower expression of TAP1, CXCL9 and CXCL10 in BOS than in RAS patients. The results of the study suggest that the expression of these genes could be a useful marker to identify BOS.
Exhaled Breath
Exhaled breath analysis is a useful, easy-to-obtain and non-invasive method in clinical practice. It is a source of information about mechanisms occurring in the alveoli and small airways. Little is yet known about the role of exhaled biomarkers in the development and pathogenesis of BOS.
Nitric Oxide
Nitric oxide (NO) is an important mediator involved in chronic inflammation of the lung [92]. At the lung level, NO acts as a vasodilator, bronchodilator, neurotransmitter and inflammatory mediator. High levels of exhaled nitric oxide have been reported in LTx patients with acute rejection, infection and lymphocytic bronchiolitis. The NO produced by the respiratory system includes alveolar concentrations of NO (CalvNO) and maximum conducting airway wall flux (J'awNO), an expression of bronchial NO [93]. Concentrations of NO in exhaled breath can be investigated by non-invasive measurement of FeNO (fraction of exhaled nitric oxide). FeNO is measured during slow exhalation from total lung capacity against a positive pressure, which may be varied to generate specific exhalation flow rates. Cameli et al. [6] investigated the potential role of FeNO and CalvNO in the diagnosis of CLAD. FeNO was measured in LTx patients, including those with BOS, and in healthy controls. The study showed higher values of FeNO and CalvNO in LTx patients than in healthy controls. BOS patients showed higher FeNO and CalvNO than non-BOS patients. This suggests that FeNO and CalvNO could be useful markers in the diagnosis of BOS; CalvNO showed higher sensitivity and specificity than FeNO in identifying BOS in LTx patients.
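For context, CalvNO and J'awNO are conventionally partitioned with the two-compartment model of NO exchange, whose high-flow approximation is linear: NO output (FeNO × flow) plotted against expiratory flow has slope CalvNO and intercept J'awNO. The sketch below illustrates that regression; the multiple-flow FeNO values are invented for illustration, and this simplified approximation is not necessarily the exact procedure used in the study cited above.

```python
# Sketch of the high-flow linear approximation of the two-compartment
# NO model: V'NO = J'awNO + CalvNO * V'E, so regressing NO output
# against expiratory flow gives CalvNO (slope) and J'awNO (intercept).
# The flow/FeNO pairs are illustrative, not patient data.
import numpy as np

flows = np.array([50.0, 100.0, 150.0, 200.0, 350.0])  # expiratory flow, mL/s
feno = np.array([18.0, 11.0, 8.5, 7.2, 5.6])          # FeNO at each flow, ppb

no_output = feno * flows  # V'NO in ppb*mL/s, numerically equal to pL/s

# Least-squares line through the (flow, output) points.
calv_no, jaw_no = np.polyfit(flows, no_output, 1)
print(f"CalvNO ~ {calv_no:.1f} ppb, J'awNO ~ {jaw_no:.0f} pL/s")
```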
Exhaled Surfactant Protein A
Surfactant protein A (SP-A) is a member of the collectin family and a component of surfactant; it is mostly produced by alveolar type II cells. Its main physiological function consists of reducing the surface tension of the alveoli [94,95]. Several studies have demonstrated the importance of SP-A in respiratory infections [96]. A significant reduction in SP-A in BAL fluid has been demonstrated in lung transplant recipients with BOS compared with stable patients [97]. A recent study [59] compared levels of SP-A in the exhaled breath of BOS patients, stable LTx patients and healthy controls. The results indicated that SP-A in exhaled particles and the SP-A/albumin ratio were lower in the BOS group than in the BOS-free group. Low levels of SP-A in exhaled particles were associated with an increased risk of BOS.
Circulating Biomarkers
Lipocalin 2 (LCN2) is a 25 kDa member of the lipocalin protein family, a family sharing a common molecular structure that binds hydrophobic molecules [98,99]. From a pathogenetic point of view, LCN2 has been implicated in modulating apoptosis, showing both pro- and anti-apoptotic activities [100]. In lung transplants, higher serum concentrations of LCN2 have been reported in patients with RAS than in those with BOS. During chronic inflammation of the lung, LCN2 reprograms the immune microenvironment and predisposes tissues to cancer development and progression [101]. Increased serum concentrations of LCN2 and activin-A have also been considered predictors of a lower likelihood of freedom from CLAD in stable LTx patients [42].
Galectins 1 and 3
Galectins are a family of 17 β-galactoside-binding proteins that modulate intracellular signalling through direct interactions with cell adhesion molecules. Galectins are also involved in the regulation of innate and adaptive immune responses [102]. Gal-1 typically functions as an anti-inflammatory and pro-resolving mediator by modulating innate and adaptive immune responses. Serum concentrations of Gal-1 may be dysregulated in various inflammatory scenarios, such as microbial infection, autoimmunity and cancer. Galectin-3 (Gal-3) has been implicated in the development of pulmonary fibrosis. Elevated concentrations of Gal-3 are associated with restrictive interstitial abnormalities, including decreased lung volumes and altered gas exchange [103]. In LTx, a study showed that concentrations of galectin-1 were higher in BOS than in stable LTx patients [45]. According to another study, concentrations of galectin-3 were significantly higher in LTx recipients with airway obstruction than in recipients without any complications [104].
Soluble CD59
CD59 is a small glycosylphosphatidylinositol-anchored protein and the sole membrane regulator of the membrane attack complex (MAC). CD59 has been investigated in different diseases as a predictive biomarker of outcome and disease progression [105]. In sepsis, serum concentrations of soluble CD59 (sCD59) were correlated with the severity of organ damage [106]. Budding et al. were the first to investigate the role of CD59 in lung transplant, showing that in chronic lung allograft dysfunction, BOS patients had higher serum concentrations of sCD59 than non-BOS patients [56].
MMP-3
Matrix metalloproteinase-3 (MMP-3), or stromelysin-1, is a protein expressed by different subsets of cells [107]. MMP-3 has been implicated in a range of pathological processes, including acute lung injury, pulmonary fibrosis and lung cancer.
It may also promote the breakdown of alveolar epithelial barriers and acute inflammatory responses, particularly in a setting of ventilator-induced lung injury. Genetic deletion of MMP-3 in mice confers protection against bleomycin-induced fibrosis, while transient overexpression of MMP-3 results in profibrotic responses in rat lungs. This presumably occurs by induction of the Wnt-β-catenin pathway by MMP-3. The protein also mediates the degradation of the ECM, enhancing a profibrotic environment, which may affect the phenotype of fibroblasts and promote further deposition of ECM and fibrosis [107,108]. MMP-3 was consistently identified in patients with BOS, suggesting that this protein may be BOS-specific and linked to the development of the disease. In particular, plasma samples from hematopoietic cell transplant recipients with incipient BOS were compared with those of patients with lung infections, chronic graft-versus-host disease without pulmonary involvement and chronic complications after hematopoietic cell transplant. Plasma concentrations of MMP-3 were not elevated prior to the onset of BOS. MMP-3 was only elevated when FEV1 decreased and could, therefore, be a non-invasive tool for diagnosis rather than a prognostic marker [57,109].
MMP-9
Matrix metalloproteinase-9 (MMP-9), also known as 92 kDa type IV collagenase, 92 kDa gelatinase or gelatinase B (GELB), belongs to the zinc-metalloproteinase family involved in the degradation of the extracellular matrix [110]. MMPs play key roles in developing and maintaining adequate oxygenation in health and disease. Broadly, except for MMP2 and MMP14, most deletions in MMPs fail to affect lung development; however, their individual absence can alter the pathophysiology of respiratory diseases. Specifically, under stress conditions, such as acute respiratory infection and allergic inflammation, MMP-9 and others can play a protective role through bacterial clearance and production of chemotactic gradients [111]. In plasma, MMP-9 was associated with BOS and predicted the occurrence of CLAD 12 months before it was diagnosed on the basis of lung function [52,112]. In a paper [58] included in our review, levels of MMP-9 were investigated via zymography and gelatin degradation. As expected, BAL concentrations of MMP-9 were elevated in BOS patients, together with neutrophil percentage. As demonstrated in other studies [52], MMP-9 contributes to the remodelling processes, leading to airway obstruction. In conclusion, MMP-9 can be considered a prognostic and diagnostic biomarker of BOS. Many studies also support the view that the epithelial-mesenchymal transition is the main pathogenetic mechanism in BOS. For example, high levels of MMP-9 have been found in BAL of CLAD patients [112], and TGF-beta (a major inducer of epithelial-mesenchymal transition) has been reported at high concentrations in BAL of BOS patients [113].
4.1.6. Self-Antigens (SAgs): K-Alpha 1 Tubulin and Collagen-V
Antibodies to k-α1 tubulin (K-α1T) and collagen type V (Col V) are both associated with the development of CLAD [114]. K-α1T is a gap junction protein with mostly intracellular functions; Col V is usually hidden in the structure of collagen type I in the lung tissue extracellular matrix but, when exposed, can act as an immunogenic ECM protein [115]. Both neo-antigens can be exposed after graft injury, leading to the induction of an immune response that may be aggravated by the loss of peripheral tolerance through immunosuppression of regulatory T-cells [114].
When bound to alveolar epithelial cells, K-α1T antibodies directly influence the onset of airway obliteration by inducing an increase in fibrogenic growth factor expression and fibroproliferation. Higher serum concentrations of anti-K-α1T and anti-Col V after LTx have been associated with a higher risk of BOS in LTx patients. Saini et al. showed that anti-K-α1T levels in serum and BAL fluid were significantly higher in patients with BOS than in those without BOS, matched for time since transplant. BAL concentrations of anti-Col V were also significantly higher in BOS than in non-BOS patients [116]. The occurrence of antibodies directed against self-antigens after LTx was shown to be linked to the development of donor-specific HLA antibodies in the recipient and could reflect an interaction between allo- and auto-immunity. Antibodies to Col V and K-α1T were found in 70% of LTx patients who had antibodies against self-antigens after the transplant. Patients with pre-transplant self-antibodies had shorter BOS-free survival than LTx patients who did not have pre-transplant self-antigen antibodies, suggesting that the measurement of these antibodies may aid risk prediction for BOS after LTx [117]. Earlier studies demonstrated that exosomes isolated from LTx patients with BOS contained lung SAgs (Col-V and K-α1T), and exosomes from LTx patients contained significantly higher levels of Col-V at 6 and 12 months before BOS. Regarding concentrations of HLA-DR and HLA-DQ, Bansal et al. demonstrated that they were significantly higher in exosomes isolated from LTx patients with RAS than in those of patients with BOS. RAS exosomes also contained higher levels of NF-kB, 20S proteasome, PIGR, CIITA and Col-V than BOS exosomes, whereas it was recently demonstrated that the concentration of K-α1T was higher in exosomes from LTx patients with BOS [40,118].
Bronchoalveolar Lavage Fluid Biomarkers
Fibro-bronchoscopy and BAL are procedures currently used in monitoring protocols for LTx recipients. They have provided much data in the search for biomarkers of CLAD in BAL fluid.
Epithelial Cell Death Markers in BAL Fluid
Epithelial cells play an important role in maintaining airflow and in lung defences, both as a physical barrier and through innate and adaptive immune responses [119]. Epithelial injury has been proposed as a mechanism in the pathogenesis of CLAD [120]. Epithelial cell death can be detected by the release of specific intracellular proteins, such as cytokeratins, which are cytoskeletal proteins that maintain the internal organisation and dynamic processes of cells [121]. During necrosis and apoptosis, lung epithelial cells release full-length cytokeratin-18 (CK18). The presence of this protein in BAL fluid may indicate processes of apoptosis and necrosis; M30, obtained from the cleavage of CK18, is an expression of cell apoptosis, while M65, which reflects intact CK18 and caspase-cleaved CK18, is related to cell necrosis. Levy et al. established possible roles of M30 and M65 in BOS [122]. They collected BAL fluid from lung transplant recipients routinely for 24 months after transplant. Acute inflammation was indicated in this cohort by >3% neutrophils in the cell count and traces of M30 and M65. The results showed that M30 and M65 were not related to neutrophil count, and that M65 levels were lower in patients with BOS than in those with RAS, while high levels of M65 were associated with worse survival in CLAD patients [67].
These results suggest that M65 could be a useful diagnostic marker of BOS.
Humoral and Cell Immunity Biomarkers in BAL Fluid
Chronic rejection after transplant is predominantly mediated by T cells; however, there is recent evidence that B-cell activation with the production of donor-specific antibodies (DSA) directed against specific human leukocyte antigens (HLA) of the graft is involved [123]. Circulating DSA promotes complement activation, resulting in lung injury [124]. According to the ISHLT definition of antibody-mediated rejection, the presence of these antibodies in BAL fluid is correlated with an increased risk of CLAD [125], especially RAS [126]. In a retrospective study, Vandermeulen et al. [58] investigated DSA in blood in relation to complement activation and immunoglobulins in BAL, as well as the correlation of these humoral markers with airway remodelling. The self-antigens (K-α1T and Col V) previously described in the serum of BOS patients were also analysed in BAL fluid in relation to club cell secreted protein (CCSP). CCSP is an anti-inflammatory protein used as a biomarker for respiratory stress in experimental models of acute and chronic lung injury. Lung transplant patients diagnosed with BOS showed a significant decline in BAL fluid concentrations of CCSP 7–9 months before BOS could be diagnosed clinically. Those who developed SAgs at the time of diagnosis of BOS had lower BAL fluid levels of CCSP than did stable LTx patients without SAgs. Regarding humoral immunity, higher levels of immunoglobulins, without specific distinctions of sub-classes (IgA, IgM, IgG1-4), and complement components (C1q and C4d) were found in the BAL fluid of patients with AMR overlapping with CLAD (predominantly the RAS phenotype) than in patients with BOS and controls, confirming the finding of Roux et al. [126]. The roles of CD4+ and CD8+ cells in solid transplant patients are still unclear. Activation of these cells in acute cell rejection after lung transplant [127] made it necessary to understand their importance in chronic rejection [97]. Hayes et al. [60] described the BAL fluid T cell panel in LTx patients affected with BOS. They demonstrated that LTx patients without BOS did not show changes in CD4+ and CD8+ cells in BAL samples within two years of transplant. On the contrary, recipients who developed BOS showed a decline in CD4+ and an increase in CD8+ cells, with a significant decline in the CD4:CD8 ratio in the same matrix in the same period. Given the small population of this study, the T cell profile could be considered a possible biomarker for BOS and probably for other CLAD phenotypes. A recent paper also demonstrated a predominance of Th17 cell subtypes and a depletion of Tregs and Bregs in the BAL fluid of patients with acute rejection with respect to BOS patients [43].
Cytokines and Chemokines in BAL Fluid
In lung transplant patients, cytokine production occurs in two steps: first, in an early antigen-independent cascade triggered by the recipient's immune system, surgical trauma, donor lung status or ischemic reperfusion injury; second, in a late antigen-dependent cascade that directs activated recipient lymphocytes into the graft [128–130]. The balance between the production of pro- and anti-inflammatory cytokines and chemokines is critical for airway repair and, therefore, also for the development of CLAD [131]. Berastegui et al. [54] examined the production of cytokines in lung transplant recipients who developed CLAD.
They found that IL-5 seemed to be the most significant biomarker, being more highly expressed in RAS than in BOS. A study by Itabashi et al. also reported that the loss of CCSP may contribute to increased production of IL-8, suggesting that CCSP plays an important role in regulating the proinflammatory cascade [44].
Gene Expression in BAL Fluid
The common rejection module is composed of 11 genes (CD6, TAP1, CXCL10, CXCL9, INPP5D, ISG20, LCK, NKG7, PSMB9, RUNX3 and BASP1) that are overexpressed during allograft dysfunction. It has been studied in many types of solid transplants to quantify inflammation in tissues [132]. Sancreas et al. [66] developed a study to understand how the module could have a biomarker role in acute rejection and CLAD. They performed BAL, trans-bronchial biopsy and histological studies in explant organs, discovering that the expression of these genes is higher in acute rejection than in chronic rejection. The most relevant circulating biomarkers and gene expression are represented in Figure 2.
NMR Spectroscopic Detection of Metabolites in BAL Fluid
There have been few studies on the possible utility of NMR spectroscopy in respiratory diseases.
Some have focused on the mouse respiratory system [133], on preterm infants with respiratory distress syndrome [134] and on paediatric patients with cystic fibrosis, in search of a correlation between the degree of airway inflammation and the number of metabolites in BAL fluid [135]. Although little information is available about the metabolic profile of BAL fluid in adults, and even less in BOS patients, Ciaramelli et al. developed a pilot study to assess the suitability of using NMR spectroscopy to explore the metabolic profile of the BAL fluid of lung transplant recipients with or without BOS [55]. If this method could predict BOS, early intervention to prevent irreversible lung damage may be possible. NMR is able to detect different metabolites, such as amino acids, Krebs cycle intermediates, mono- and disaccharides, nucleotides and phospholipid precursors. The authors tried to understand whether some of these molecules could be early biomarkers of BOS. They found high levels of some branched-chain amino acids (valine, leucine, isoleucine) in the BAL fluid of BOS patients, and these levels were related to the severity of BOS. Another potential role is played by the taurine/hypotaurine pathway. Taurine is a regulator of cell volume via membrane stabilisation and has antioxidative, anti-inflammatory and anti-apoptotic properties [136]. Taurine is a weak agonist of chloride-permeable gamma-aminobutyric acid type A receptors (GABA-A R) and glycine receptors (GlyR) that are located not only in neural synapses but also in the central nervous system and the lungs. This suggests that taurine plays a role in the lungs, potentiating the relaxation of airway smooth muscle cells through binding to GABA-A R and the secretion of mucus by goblet cells [137]. The release of ROS by neutrophils and macrophages during inflammatory states could stimulate a decrease in the uptake of intracellular taurine and an increase in taurine release into extracellular fluids, with a consequent increase in mucus secretion. Increased levels of taurine in the BAL fluid of BOS patients may, therefore, reflect inflammation-induced osmotic stress or epithelial cell damage. Since taurine accumulates in the cytoplasm of neutrophils and other leukocytes, its presence may also suggest high levels of leukocytes in the airways (due to a state of inflammation) [138]. Moreover, levels of lactate in BAL fluid are related to an inflammatory state. BOS patients with increased BAL fluid levels of lactate could have an inflammatory status that exacerbates the disease. Together, these results suggest that there are possible markers for early diagnosis of BOS.
LKB1 from Tissue
Downregulation of LKB1 has been demonstrated in circulating exosomes prior to clinical diagnosis of BOS, suggesting that this enzyme may have a role in the pathogenesis of the syndrome. In particular, incubation of BOS-exosomes also decreased LKB1 expression and induced epithelial-mesenchymal transition markers in an air-liquid interface culture model [38]. This study provided new evidence that exosomes released from transplanted lungs undergoing chronic rejection are associated with an inactivated tumour suppressor gene, LKB1, and this loss induces epithelial-mesenchymal transition, leading to CLAD in humans.
miRNAs
Recent studies suggest that epigenetic regulation of microRNAs might play a role in the development of BOS. Di Carlo S. et al.
found, through in situ hybridisation, the dysregulation of two candidates, miR-34a and miR-21, as pathogenetic factors of BOS [139]. Xu Z. et al. identified miR-144 as the most significantly altered miRNA. miR-144 is a principal regulator of TGF-β signalling, involving an increase in Smad-2, Smad-4, FGF and VEGF, leading to diminished fibrogenesis [140]. Recently, the alteration of Smad-4 has been found in association with miR-155-5p and miR-23b-3p, which showed altered expression in responders to extracorporeal photopheresis [141]. Budding et al. showed that miR-21, miR-29a, miR-103 and miR-191 levels were significantly higher in BOS+ patients prior to clinical BOS diagnosis [142]. Most studies have focused on extracellular miRNAs as potential biomarkers since they are stable and can be detected in blood, urine and other body fluids by simple, sensitive and relatively cheap assays, even after years of sample storage. miR-21 and miR-29a are the most commonly observed aberrant miRNAs in human cancers. Post-transcriptionally, miR-21 downregulates the expression of the tumour-suppressor gene PTEN and stimulates growth and invasion in non-small cell lung cancer. Inhibition of miR-21 reduces cancer cell proliferation, migration and invasion. miR-103 has recently been associated with the metastatic capacity of primary lung tumours [143]. Bozzini S. et al. identified miR-21-3p in the context of OB fibrogenesis, suggesting that it plays a role as a profibrotic effector in CLAD fibro-obliterative processes [144].
Whole Blood and BAL Cell Subsets
To avoid the onset of BOS, it is important to define the cause of inflammation and to understand how immune cells are implicated in inflammation and/or repair. Immunosuppressive therapy is the major management strategy for preventing rejection of the new organ by the activated immune system [145]. Patients receive a regimen of immunosuppressants that reduces the differentiation and proliferation of lymphocytes, including B and T cells [146].
B Cells
Many recent studies have focused on the role of B cells and their association with long-term allograft survival. B lymphocytes are a cell population that expresses clonally different cell-surface immunoglobulin (Ig) receptors that recognize specific antigen epitopes [147]. Phenotypically, B cells are differentiated on the basis of the expression of specific cell-surface markers [148]. Transitional B cells are the most immature peripheral B cells and express CD24 and CD38. Human memory B cells, in contrast, can be divided into different populations depending on the expression of the markers IgD and CD27: naïve (IgD+CD27-), non-switched memory (IgD+CD27+), switched memory (IgD-CD27+) and CD27-negative (IgD-CD27-) B cells, also known as double-negative B cells (DN B cells). According to the literature, memory B cells and plasmablasts are correlated with early acute antibody-mediated rejection [149], while naïve and transitional B cells are associated with tolerance in kidney and liver transplants [150]. The peripheral B cell subset has been investigated in lung transplant recipients with and without BOS and in healthy controls. An increase in DN B cells and a decrease in transitional and naïve B cells were recorded in BOS patients with respect to healthy controls [151]. The medication used to prevent the development of allograft dysfunction usually prevents the proliferation of B cells.
During BOS, the immune system is exposed to abundant antigens due to chronic inflammation and repair processes [152]. It is probable that naïve and transitional B cells differentiate, and their peripheral frequency decreases while DN B cell numbers increase. Regulatory B cells (Bregs) function as immunosuppressors through the production of anti-inflammatory cytokines, such as IL-10, IL-35 and transforming growth factor β (TGF-β) [153]. Regarding B cells, CD19+CD24hiCD38hi B cells are reported to be elevated in the BAL of acute rejection patients [154]. They suppress immunopathology by prohibiting the expansion of pathogenic T cells and other proinflammatory lymphocytes. Regulatory B cells support immunological tolerance and may be involved in maintaining long-term allograft function. CD9 is a cell surface glycoprotein of the tetraspanin family, characterized by four transmembrane-spanning domains and two extracellular domains [155]. Several studies implicate CD9 in the immunosuppressive activity of Bregs [156]. Brosseau et al. [68] demonstrated the importance of Bregs and of the CD9 marker, a powerful predictive marker of stability and long-term BOS-free survival after lung transplant. Twenty-four months after transplant, plasma concentrations of CD9 were higher in stable patients than in those who would develop BOS. CD9 B-cell frequencies allow excellent discrimination between stable patients and those destined to develop BOS at 24 months.
T Cells
CD8+ and CD4+ T cells are two subpopulations of the adaptive immune system: CD4+ T cells are helper T cells that assist other lymphocytes in evoking an immune response [157], while CD8+ T cells are cytotoxic T cells that kill virus-infected cells and tumour cells [158]. Both are also involved in transplant rejection. Durand et al. [48] demonstrated that the CD4+ and CD8+ compartments are not biased in BOS patients. An increased proportion of circulating CD4+CD25hiFoxP3+ T cells was reported one month after LTx in patients who proceeded to develop BOS within 3 years. These cells may act as negative regulators of BOS development. Although this result is encouraging, the underlying mechanisms leading to the increased proportion of regulatory CD4+CD25hiFoxP3+ T cells in BOS patients remain unclear. Regulatory T cells modulate the activation and proliferation of other immune cells and are crucial for maintaining T-cell tolerance to self-antigens. They secrete different immunosuppressive cytokines, such as IL-10 and TGF-β [134,135]. Piloni et al. [50] analysed the long-term peripheral kinetics of Tregs in lung transplant patients and assessed their association with different clinical variables. Previous studies demonstrated that peripheral Tregs may be a major regulatory subset in lung transplant. The latest results confirmed the role of Tregs in lung graft acceptance/rejection. Peripheral Treg counts fell significantly in CLAD patients. The degree of this decrease was correlated with the severity of BOS. In order to determine the phenotype of Tregs, Bregs and Th17 cells, samples of peripheral blood from stable lung transplant patients were analysed [43]. An increase in Th17 subtype percentages in PBMCs and BAL fluid and a reduction in Tregs, and hence in the Treg/Th17 ratio, were observed in acute rejection patients. It was demonstrated that cytotoxic and proinflammatory lymphocyte percentages increased after transplant [162]. To explore their role, Hodge et al.
[65] measured the intracellular cytotoxic mediator granzyme B, the proinflammatory cytokines interferon-gamma (IFN-γ) and tumour necrosis factor alpha (TNF-α), and CD28 in blood, BAL fluid and large airways in patients with BOS. The results indicated a significant decrease in T cells, an increase in NKT-like cells and an increase in CD8+ T and NKT-like cells in BOS patients with respect to stable patients and healthy controls. In detail, the percentages of senescent CD28null CD8+ T, CD8- T and NKT-like cells increased in BAL fluid and in large and small airways in patients with BOS. Loss of CD28 was associated with more T and NKT-like cells expressing granzyme B, IFN-γ and TNF-α. Loss of CD28 seemed to be associated with repeated antigen stimulation, as demonstrated in chronic obstructive pulmonary disease [163]. Treatment options targeting the proinflammatory nature of these cells may therefore be worth exploring.
Monocyte-Macrophage Lineage
The monocyte-macrophage lineage plays an important role in the immunopathology of chronic rejection [164]. Monocytes recognize "danger signals" via pattern recognition receptors. They conduct phagocytosis, present antigens, secrete chemokines and proliferate in response to infection and injury. Human blood monocytes can be divided into three subsets, classical, intermediate and non-classical, based on CD16 and CD14 expression [165]. To distinguish the non-classical and intermediate subsets, another marker, 6-sulfo LacNAc (SLAN), has been identified. Described for the first time as a cell-surface marker of a certain type of dendritic cell in human blood, SLAN is a carbohydrate residue associated with the non-classical monocyte subset [166]. The functions of monocyte subsets are different and also depend on the inflammatory pathology. In the lungs, the classical subset can differentiate into pulmonary dendritic cells and macrophages, whereas the functions of the other two subsets are still unknown [167]. Schreurs et al. analysed peripheral monocyte subsets in LTx patients with and without BOS [61] and found a reduction in the number of non-classical monocytes, i.e., those expressing SLAN-positivity, in the former, whereas the expression of SLAN-positivity was higher in BOS patients. To highlight the activity of T cell co-stimulation and antigen presentation, they also stained blood monocytes for the surface markers HLA-DR and CD86. The only monocyte marker that proved significant in BOS was HLA-DR, which showed a significant increase in expression on non-classical monocytes. Although BOS is an outcome of chronic inflammation, it does not lead to increased production of monocytes. Further studies will be necessary to validate these results.
Whole Blood Gene Expression Profiles
Performing non-invasive microarray gene expression profiling of whole blood, Danger et al. [49] identified and validated POU2AF1 (POU Class 2 Homeobox Associating Factor 1), TCL1A (T-cell leukaemia/lymphoma protein 1A) and BLK (B-lymphoid tyrosine kinase) as three predictive biomarkers of BOS more than 6 months before diagnosis. By monitoring these three genes, it proved possible to stratify patients on the basis of BOS risk. POU2AF1 is a B cell transcriptional coactivator implicated in B cell development and function [168]. It is also expressed in T cells and is involved in the T-cell-dependent B cell response. BLK encodes a non-receptor protein tyrosine kinase and is involved in the regulation of B cell receptor signalling [169].
TCL1A, known as an oncogene, acts as a coactivator of the serine-threonine kinase Akt and promotes cell survival, growth and proliferation [170]. TCL1A has been reported to be downregulated in the peripheral blood of patients with BOS before disease onset. This is in line with the findings of another study [171], where the gene was downregulated at the time of acute allograft rejection of transplanted kidneys but overexpressed in tolerant patients. The exact contribution of these genes to the development of BOS remains to be investigated.
Cell Culture Supernatants
Matrix Metalloproteinase 9
The pathogenesis of BOS remains largely unknown, but the histological characteristics of lung tissue from LTx patients can be observed: BOS is characterized by the infiltration of lymphocytes into the bronchiole walls, followed by a fibrotic process. This leads to the deposition of the extracellular matrix (ECM) that obliterates the small airways [172]. It has been postulated that this fibrotic process begins with epithelial damage caused by alloimmune and non-alloimmune mechanisms, followed by unsuccessful repair. Fibroblasts play a central role in this process, although it is unclear whether they are activated in situ or recruited from circulation. Another role in fibrosis is played by bronchial epithelial cells (BECs) and the epithelial-mesenchymal transition, which leads to the downregulation of epithelial markers (e.g., E-cadherin) and upregulation of mesenchymal markers (e.g., vimentin) [173]. Bronchial epithelial cells, in the process of mesenchymal differentiation, produce matrix metalloproteinases and ECM proteins (including type-I collagen and fibronectin). Matrix metalloproteinase (MMP)-9 plays an important role in all diseases based on airway-remodelling processes. According to this pathogenetic explanation of BOS, the allogeneic immune response is the start of the process, but the effect of T cells on BECs is unclear. To demonstrate the role of allogeneic immunity via epithelial-mesenchymal transition in BOS, Pain et al. analysed the response of BECs stimulated by T-cells in synergy with TGF-beta [52]. They set out to detect MMP-9 in the cell culture supernatant of LTx patients. Concentrations of MMP-9 were high 12 months before the functional diagnosis of CLAD, suggesting a role for this metalloproteinase as an early biomarker. However, MMP-9 did not distinguish RAS and BOS.
HRCT Quantitative Image Analysis Score
Quantitative image analysis (QIA) of the lung parenchyma could be a useful tool for phenotyping CLAD patients. It could also have prognostic value in terms of survival [39]. The QIA score is based on the proportion of lung volume affected by interstitial disease or air-trapping. It is an imaging scoring system that can be performed on lung CT images by estimating the percentage of Quantitative Ground Glass (QGG), Quantitative Lung Fibrosis (QLF) and Quantitative Honey Combing (QHC). QHC + QLF + QGG is a measure of total interstitial disease (QILD). Air-trapping is then scored by analysing CT scans during end-expiration or at residual volume. Weigt et al. used the QIA score in LTx patients to verify its possible utility for phenotyping CLAD [39]. They analysed chest HRCT images of bilateral LTx patients within 90 days of CLAD onset and in a control (no-CLAD) group. They found that BOS cases had a lower score for interstitial disease than patients with RAS or mixed CLAD phenotypes. However, BOS showed more air-trapping.
These results were also compared with the standard phenotyping system (2019 ISHLT consensus). There was quite good concordance between the two methods. Another important result was that the risk of death was higher in RAS and mixed phenotypes (with higher QILD) than in BOS. The QIA score on HRCT images can, therefore, be considered a diagnostic as well as a prognostic marker.
Pleural Thickening Evaluated by Lung Ultrasound
Pleural thickening may be focal or diffuse, and it may have different causes, both benign and malignant [174]. In interstitial lung disease, the lung parenchyma and pleura show increased tissue density, which can be demonstrated by lung ultrasonography as a significant number of B-line reverberation artefacts indicative of interstitial syndrome and pleural thickening. RAS is characterized by typical radiological alterations of interstitial lung disease that are absent in BOS, namely pleural thickening and an upper lobe-dominant fibrotic pattern [155]. By virtue of its sensitivity and specificity, lung ultrasonography can be a supplementary tool for identifying pleural thickening on upper lobes useful for phenotyping CLAD patients. In 2015, Davidsen et al. demonstrated the utility of lung ultrasonography for identifying pleural thickening in CLAD patients, assuming pleural thickening to be a marker of RAS, absent in BOS [41]. They studied BOS and RAS prospectively by chest HRCT and lung ultrasonography. Pleural thickening was more pronounced in RAS, as were chest HRCT features, demonstrating the potential utility of ultrasonographic features and radiological markers to discriminate BOS from RAS.
Parametric Response Mapping of Functional Small Airway Disease by CT
Parametric response mapping (PRM) is a CT method that can be used to quantify functional small airway disease and parenchymal disease. In BOS, the predominant pathogenetic mechanism is associated with small airway disease and, consequently, with airflow limitation [175,176]. Belloli et al. compared the concordance between spirometry data of bilateral lung transplant recipients and PRM results [53]. Patients with isolated FEV1 decline had significantly worse functional small airway disease than their control subjects, whereas patients with declines in FEV1 and FVC (less than 80% at baseline) had worse parenchymal disease. PRM could be useful for phenotyping CLAD patients. Moreover, functional small airway disease >30% proved to be a strong predictor of survival. Patients with these radiological characteristics lived, on average, 2.6 years less than patients with functional small airway disease <30%, suggesting a prognostic role for this marker. Figure 3 represents the radiological and tissue markers of BOS.
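Because most of the imaging and biomarker studies reviewed here anchor their case definitions to spirometry, the ISHLT rule quoted in the introduction (a persistent ≥20% FEV1 decline, with onset at the first value ≤80% of the post-transplant baseline) can be written as a simple screening routine. The sketch below is illustrative only: taking the baseline as the mean of the two best post-transplant FEV1 values follows common practice, and the 3-week persistence window is an assumption, not part of the formal definition.

```python
# Illustrative screening rule for the spirometric CLAD definition used
# in this review: baseline = mean of the two best post-transplant FEV1
# values (a common convention); onset = first FEV1 <= 80% of baseline
# that persists. The 3-week persistence window is an assumption.
from datetime import date, timedelta

def clad_onset(visits, persistence=timedelta(weeks=3)):
    """visits: list of (date, FEV1 in litres) sorted by date."""
    best_two = sorted(v for _, v in visits)[-2:]
    baseline = sum(best_two) / len(best_two)
    threshold = 0.8 * baseline            # a >=20% decline from baseline
    for i, (day, value) in enumerate(visits):
        if value <= threshold:
            # Require confirmation: every later value measured at least
            # `persistence` after this visit must also stay low.
            later = [v for d, v in visits[i + 1:] if d >= day + persistence]
            if later and all(v <= threshold for v in later):
                return day                # date of the first FEV1 <= 80%
    return None

visits = [(date(2021, m, 1), v) for m, v in
          [(1, 2.9), (2, 3.0), (3, 2.8), (4, 2.3), (5, 2.2), (6, 2.1)]]
print(clad_onset(visits))  # -> 2021-04-01 with these illustrative values
```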
Recently, clinical, functional and radiological scores have been developed for the prediction of BOS. Considering that no single biomarker with the accuracy and reliability needed to predict the diagnosis and prognosis of BOS is yet available, the goal will be to implement a panel of biomarkers of biological origin combined with radiological and/or clinico-functional findings. Furthermore, other biomarkers that have shown a good correlation with the pathogenesis of BOS (such as miRNAs and gene expression profiles) remain largely unexplored. Moreover, using biomarkers of other fibrotic lung diseases (such as IPF) may help to identify a novel set of proteins with diagnostic and prognostic features.

Conflicts of Interest: The authors declare no conflict of interest.
Expression Profiling of Proliferation and Apoptotic Markers along the Adenoma-Carcinoma Sequence in Familial Adenomatous Polyposis Patients

Introduction. Familial adenomatous polyposis (FAP) patients have a germline mutation in the adenomatous polyposis coli (APC) gene. The APC protein interacts with beta-catenin, resulting in the activation of the Wnt signalling pathway. This results in alterations in cell proliferation and apoptosis. We investigated the expression of beta-catenin and related proliferation and apoptotic factors in FAP patients, exploring the expression along the adenoma-carcinoma sequence. Methods. The expression of beta-catenin, p53, bcl-2, cyclin-D1, caspase-3, CD10, and Ki-67 proteins was studied by immunohistochemistry in samples of colonic nonneoplastic mucosa (n = 71), adenomas (n = 152), and adenocarcinomas (n = 19) from 16 FAP patients. Results. The expression of beta-catenin, caspase-3, cyclin-D1, and Ki-67 was increased in both adenomas and carcinomas in FAP patients, compared with normal mucosa. p53 and CD10 expression was only slightly increased in adenomas, but more frequently expressed in carcinomas. Bcl-2 expression was increased in adenomas, but decreased in carcinomas. Conclusion. This is the first study investigating collectively the expression of these molecules together in nonneoplastic mucosa, adenomas, and carcinomas from FAP patients. We find that beta-catenin and related proliferative and apoptotic factors (cyclin-D1, bcl-2, caspase-3, and Ki-67) are expressed early in the sequence, in adenomas. However, p53 and CD10 are often expressed later in the sequence, in carcinomas.

Introduction

Familial adenomatous polyposis (FAP) is the most common polyposis syndrome and affects 1 in 6850 to 1 in 29,000 live births [1]. FAP is caused by a germline mutation of the adenomatous polyposis coli (APC) gene [2]. Affected individuals develop up to several thousand adenomatous polyps in the colon and rectum. Furthermore, carcinoma invariably develops in FAP patients if the colon is not resected by the age of 40 or 50 [3]. The APC protein acts to bind beta-catenin, increasing beta-catenin turnover and decreasing its levels [4]. Beta-catenin in turn serves a dual purpose. It forms a complex with E-cadherin as well as other catenins at the cell membrane, forming adherens junctions, and mediates the interaction between the cadherin-catenin complex and the actin cytoskeleton. This interaction is vital for cell-cell adhesion. In addition, when Wnt ligands are present, binding of beta-catenin to APC is inhibited and beta-catenin levels are increased in the cytoplasm and nucleus. Beta-catenin acts together with TCF/LEF as a transcription factor for multiple genes, including c-Myc, n-Myc, and cyclin-D1 [5]. Many APC mutations result in a truncated protein, with loss of the beta-catenin regulatory activity. This results in activation of the target genes of the Wnt signalling pathway, many of which are involved in the cell cycle and proliferation. APC mutations result in abnormal crypt proliferation in the gastrointestinal tract, resulting in adenoma formation [6]. Apart from causing multiple adenoma formation in FAP patients, APC mutation is also an early event in the adenoma-carcinoma sequence of sporadic tumours. However, the formation of invasive colorectal carcinomas requires further cumulative genetic changes, including in the RAS family of genes, SMAD4, and TP53.
In this study, we investigated the expression of beta-catenin and downstream and related factors in nonneoplastic colonic mucosa, adenomas, and carcinomas from FAP patients, exploring the changes in expression along the adenoma-carcinoma sequence. The clinical history and histopathological reports and slides were reviewed. Multiple representative tissue blocks containing adenomas, adenocarcinomas and nonneoplastic mucosa were selected for each case. The material from the 16 patients included 152 adenomas (140 with low grade dysplasia and 12 with high grade dysplasia) and 19 adenocarcinomas (all moderately or poorly differentiated), including 2 separate synchronous carcinomas from 3 of the patients. There were 71 sections of nonneoplastic mucosa, of which 66 were adjacent to either adenomas or adenocarcinomas and 5 were away from tumours.

2.2. Immunohistochemistry. The expression of beta-catenin, p53, bcl-2, caspase-3, cyclin-D1, CD10, and Ki-67 was evaluated by immunohistochemistry using the avidin-biotin immunodetection complex method. Two-micron-thick sections from formalin-fixed, paraffin-embedded tissue were prepared, deparaffinised, and rehydrated. Endogenous peroxidase was blocked by incubation in hydrogen peroxide. Antigen retrieval was performed by microwaving in either ethylenediaminetetraacetic acid (EDTA) or citrate buffer. Sections were incubated with normal goat serum for 10 minutes and then with the primary antibody for 60 minutes at room temperature. The sources, dilutions, antigen retrieval buffers and durations of microwaving for the primary antibodies are presented in Table 1. The sections were washed and then incubated with goat anti-mouse biotinylated immunoglobulin for 30 minutes, followed by streptavidin peroxidase for 30 minutes. The slides were developed in 3,3′-diaminobenzidine (DAB), followed by a haematoxylin counterstain. Sections from nonneoplastic colonic mucosa from non-FAP patients, normal breast, and endometrium were used as positive controls, and for each case a section in which the primary antibody was replaced by phosphate-buffered saline was used as a negative control.

2.3. Assessment of Expression. All sections were examined by light microscopy for the presence of expression and the cellular distribution of the proteins (between the cell membrane, cytoplasm, and nucleus) in nonneoplastic mucosa, adenomas and adenocarcinomas. Cell staining intensity was scored as negative (0), weak (1+), moderate (2+), and strong (3+). Tumours showing antigen expression showed intratumoural heterogeneity in the intensity of staining. For each case, the percentage of cells with the predominant staining intensity was recorded. For statistical purposes, cases which were moderately (2+) or strongly (3+) positive in more than 10% of cells were designated overall positive, while cases in which expression was weak (1+) or seen in less than 10% of cells were considered overall negative. Slides were scored independently by two investigators (J. Wang and M. El-Bahrawy), and disagreements were resolved by review until a consensus was reached.

2.4. Statistical Analysis. The presence of significant differences in the expression of beta-catenin, p53, bcl-2, cyclin-D1, caspase-3, and CD10 between adenomas, carcinomas, and nonneoplastic mucosa was assessed by the chi-square (χ²) test. Ki-67 indices were compared between adenomas, carcinomas, and nonneoplastic mucosa using the Mann-Whitney test, with a z-score of >1.96 or <−1.96 considered significant at P < 0.05.
A different statistical test was used for the Ki-67 indices, as Ki-67 was scored as a percentage of cells rather than assigned an overall positive or negative score. Therefore, a nonparametric numerical statistical analysis, instead of a categorical one, was used. All statistical analyses were performed using the statistical functions in Microsoft Excel.

Results

3.1. Histology. The adenomas comprised 140 adenomas with low grade (mild or moderate) dysplasia and 12 adenomas with high grade dysplasia. The carcinomas comprised 19 moderately differentiated and poorly differentiated adenocarcinomas. For comparison, 71 sections of nonneoplastic mucosa were analysed. Of these, 66 sections were adjacent to either adenomas or adenocarcinomas, while 5 were sections away from tumours.

3.2. Expression of Markers in Nonneoplastic Epithelium. Membranous and cytoplasmic staining for beta-catenin was seen in all cases of nonneoplastic epithelium (100%, Figure 1(a)). The membranous and cytoplasmic staining was most pronounced in the surface mucosa. In contrast, nuclear positivity was only seen in 6 cases of nonneoplastic mucosa (9.0%). Where nuclear staining was present, it was located at the base of the crypts, presumably in stem cells. No difference in staining was noted between nonneoplastic epithelium adjacent to or away from tumours. Staining for p53 was not seen in any of the sections of nonneoplastic epithelium (Figure 1(d)). Bcl-2 staining was only seen in one section (1.4%), where positivity was seen in the cytoplasm and nuclei (Figure 1(g)). Cyclin-D1 was present in 3 cases (4.5%), with staining predominantly in the nuclei, with some staining in the cytoplasm (Figure 1(j)). Caspase-3 staining was not seen in nonneoplastic epithelium (Figure 1(m)). CD10 positivity was present in 4 cases (6.1%) and was seen in the cell membranes (Figure 1(p)). The Ki-67 positivity in nonneoplastic epithelium was 5%, predominantly in cells in the basal one-third of the crypts (Figure 1(s)).

3.3. Expression of Markers in Adenomas. In adenomas, membranous and cytoplasmic expression of beta-catenin was seen in all cases (100%, Figure 1(b)). However, there was also nuclear staining in 147 cases (99.3%). Compared with nonneoplastic epithelium, there was a significantly increased frequency of nuclear staining for both low and high grade adenomas combined (χ² > 25, P < 0.01). There was no significant difference in the frequency of nuclear staining between low grade and high grade lesions, 99.3% and 100%, respectively (χ² = 0.003, P = 0.99).

3.4. Expression of Markers in Carcinomas. Beta-catenin staining was seen only in the cytoplasm and nuclei of carcinoma cells (Figure 1(c)), being positive in 14 cases (93.3%). Compared with adenomas, there was no statistically significant increase in the frequency of nuclear staining (χ² = 0.03, P = 0.87). Expression profiles of the different molecules in nonneoplastic mucosa, adenomas, and carcinomas are summarised in Table 2.

Discussion

The current results confirm our previous findings that while the expression of beta-catenin was confined predominantly to the cell membrane in nonneoplastic epithelium, with weak staining in the cytoplasm, in adenomas and carcinomas there was nuclear staining as well as increased cytoplasmic staining in more than 90% of cases [7]. In this current study, we show that this expression profile was present in both low grade and high grade adenomas as well as carcinomas, with similar percentages (93%-100%).
This shows that nuclear localisation of beta-catenin is an early phenomenon in the adenoma-carcinoma sequence in FAP patients. This is expected due to the germline APC mutation in these patients. Furthermore, cytoplasmic and nuclear beta-catenin is also frequently seen in sporadic adenomas and carcinomas [8,9]. There do not seem to be differences in nuclear positivity frequencies between sporadic and FAP tumours (both adenomas and carcinomas). Mutations with subsequent accumulation of the mutated TP53 gene product are well recognised as a late event in the adenoma-carcinoma sequence for sporadic tumours. However, for FAP patients, previous studies have shown TP53 mutations occurring in adenomas [10]. Immunohistochemistry studies also showed that 37% of adenomas from FAP patients expressed p53, compared with 20% of sporadic adenomas [11]. In our study, we found that 7.1% of low grade adenomas expressed p53, compared with 25% of high grade adenomas and 70% of invasive carcinomas. These results suggest that although some p53 overexpression occurs in early lesions, the majority of the changes occur in the transformation from high grade dysplasia to invasive tumours. Although we did not compare our results directly with sporadic tumours, our results suggest that the incidence of p53 overexpression in FAP neoplasms is similar to that recognised for sporadic colorectal tumours, where 32% of adenomas and 67% of carcinomas expressed p53 [12]. The expression of bcl-2 has been studied extensively in sporadic colorectal tumours, but not in FAP patients. There has been only one study of bcl-2 in FAP patients, in which only adenomas were analysed. In sporadic tumours, several studies have shown consistent results, with bcl-2 expressed in 80%-90% of adenomas, but the expression was reduced to 30%-50% in carcinomas [13,14]. Our study shows 48.7% of adenomas being positive for bcl-2 but only 5.6% positivity in carcinomas. The difference in the percentages in comparison with sporadic tumours may be due to differences in scoring criteria, but it may also be due to an actual decrease in bcl-2 expression in FAP lesions. Nevertheless, there is a similar trend of downregulation of the antiapoptotic protein with invasion in FAP patients as in sporadic tumours, and this reflects similar apoptotic signalling changes in both FAP and sporadic tumours. Caspase-3 is a protease that interacts with other caspases and is a key component of the executioner apoptotic pathway. In this study, we found that caspase-3 expression is absent in nonneoplastic epithelium, but positive in 72.2% of low grade adenomas, 50% of high grade adenomas and 75% of carcinomas. This is the first study to show caspase-3 expression along the adenoma-carcinoma sequence in FAP patients and to show that caspase-3 expression occurs early in the sequence, but persists into invasive carcinomas, unlike the expression of the antiapoptotic protein bcl-2. There have been few studies on caspase-3 in colorectal cancers in general. One study showed increased expression in adenomas compared with nonneoplastic mucosa, but decreased expression in carcinomas compared with adenomas [15]. However, it has also been shown that caspase-3 expression correlated with a higher risk of recurrence [16]. CD10 is a cell surface metalloendopeptidase involved in cleaving cytokines. It was initially discovered to be expressed in various lymphomas, but has subsequently been found widely expressed in various mesenchymal cells and mesenchymal-derived tumours.
CD10 expression in sporadic colorectal tumours has been rarely studied, and to date there are no published data on CD10 in FAP patients. We found that CD10 is rarely expressed in nonneoplastic epithelium or adenomas (0.8%-9.1%). In contrast, 40% of carcinomas expressed CD10. These results are similar to data on sporadic tumours, with CD10 expression associated with an invasive phenotype rather than adenomas, and with more than 50% of carcinomas expressing CD10 [17][18][19]. Furthermore, CD10 expression in colorectal cancers is associated with increased invasiveness, lymph node involvement, and liver metastasis [20,21]. These results suggest that FAP tumours have mechanisms of acquiring invasive potential similar to those of sporadic tumours. Since cyclin-D1 is a downstream target of beta-catenin, we expected cyclin-D1 expression to be increased in both adenomas and carcinomas. This was shown in our study, where 88.1% of low grade adenomas, 91.7% of high grade adenomas, and 81.2% of carcinomas were positive, compared with 4.5% of nonneoplastic epithelium. A previous study on FAP adenomas showed 40% (±20%) expression of cyclin-D1 in adenomas, compared with no expression in nonneoplastic mucosa [22]; however, no carcinomas were studied. In contrast, another study showed increased expression in sporadic carcinomas, but adenomas were not studied [23]. Nevertheless, these results show that increased cell proliferation may be linked to beta-catenin dysregulation. Finally, while the normal Ki-67 proliferation index is 5% in nonneoplastic epithelium, with Ki-67 staining seen mostly in the base of the crypts, the Ki-67 index is 37% in low grade adenomas, 32% in high grade adenomas, and 41% in invasive carcinomas; this early increase in cell proliferation mirrors the increased expression of beta-catenin and cyclin-D1. This is expected, as these factors drive cell proliferation. This is seen also in sporadic adenomas and carcinomas, where the Ki-67 index was 30% and 38% in adenomas and carcinomas, respectively [12]. Overall, we find that beta-catenin and related proliferative factors (cyclin-D1 and Ki-67) are overexpressed in adenomas. Our data suggest that increased cell turnover is an early event in the FAP adenoma-carcinoma sequence, as the frequency of these factors is significantly increased in adenomas compared with nonneoplastic epithelium and further increased in carcinomas compared with adenomas. In contrast, changes in the apoptotic pathway appear to be more complex in the sequence of events. While the apoptotic effector protease caspase-3 is upregulated in adenomas and persists in carcinomas, the antiapoptotic regulator bcl-2 is upregulated in adenomas, but downregulated in invasive carcinomas. In contrast, p53 overexpression occurs more commonly later in the sequence, in invasive carcinomas. A recent study suggests that Wnt signalling results in increased apoptosis via increased expression of the proapoptotic factors bok and bax [24]. However, Wnt signalling also induces expression of the antiapoptotic factors bcl-W and bcl-2 [25,26]. Since both pro- and antiapoptotic factors are upregulated in adenomas, tumour progression is likely governed by a balance between these factors. Later, as tumours become invasive, it may be that p53 mutations occur which substitute for the effect of bcl-2 in counteracting the proapoptotic factors. Interestingly, several previous papers have also reported an inverse relationship between p53 and bcl-2 expression in adenomas and carcinomas [14,27].
Furthermore, both p53 overexpression and loss of bcl-2 were associated with poorer prognoses [28,29].

Conclusions

In conclusion, our study shows that the immunophenotypic changes in proliferation and apoptosis markers seen in FAP tumours are similar to those seen in sporadic tumours, suggesting similar tumour initiation and progression pathways. The results also suggest that disturbances in apoptosis are more influential factors in tumour progression in comparison to cell proliferation, which seems to be initiated in the early stages of tumourigenesis. We therefore recommend that these apoptotic pathways be the point of focus for further research and development of therapeutic targets and agents.
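As a practical aside, the categorical and nonparametric comparisons used above are straightforward to reproduce outside Excel. The following is a minimal Python sketch of the two tests described in the Methods (chi-square on overall positive/negative counts and Mann-Whitney on Ki-67 indices), using SciPy; all counts and indices shown are illustrative placeholders, not the study's data.

```python
# Minimal sketch (not the authors' code) of the statistical comparisons
# described in the Methods, using SciPy instead of Excel.
from scipy.stats import chi2_contingency, mannwhitneyu

# 2x2 contingency table: rows = lesion type, columns = (positive, negative),
# e.g., nuclear beta-catenin in nonneoplastic mucosa vs adenomas.
table = [[6, 60],    # hypothetical positive/negative counts, nonneoplastic mucosa
         [147, 1]]   # hypothetical positive/negative counts, adenomas
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.3g}")

# Mann-Whitney U test on Ki-67 indices (percentages of positive cells).
ki67_adenomas = [35, 40, 31, 38, 42]     # hypothetical per-lesion indices
ki67_carcinomas = [44, 39, 41, 45, 37]   # hypothetical per-lesion indices
u, p = mannwhitneyu(ki67_adenomas, ki67_carcinomas, alternative="two-sided")
print(f"U = {u}, P = {p:.3g}")
```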
Effects of Aquatic Exercise on Dimensions of Quality of Life and Blood Indicators in Patients with Beta-Thalassemia Major

Background: Thalassemia is a group of genetic blood disorders characterized by anemia. The present research aimed at evaluating the effects of aquatic exercise on quality of life and blood indices in patients with beta-thalassemia major. Methods: A clinical trial study involving 40 patients with thalassemia major, divided into two groups: experimental and control. The tools used to collect the data included a demographic information questionnaire, a blood indicators questionnaire, and the SF-36 quality of life questionnaire. After obtaining consent, the experimental group exercised in water in a pool three times per week for 8 weeks. The quality of life questionnaire was completed 24 h before the intervention, 24 h after the last session of the exercise program, and 2 months after the end of the exercise program. Results: The research revealed that exercise in water affected quality of life, hemoglobin, hematocrit, and serum iron and ferritin, such that the mean scores of quality of life and blood indicators showed a significant difference in the experimental group. Conclusions: The use of a regular exercise program combined with drug therapy and blood transfusion can be useful in the treatment of beta-thalassemia patients.

Patients with thalassemia experience numerous problems, such as unpleasantness, sadness, and distress caused by health problems; concern related to premature death; social isolation and depression; physical problems; prolonged and frequent diet therapy; hospitalization and absenteeism from the workplace or school; and the economic problems caused by the cost of hospitalization and the use of iron removal treatments. [14,15] Physical and psychological problems in this group of patients result in hopelessness, reduced social functioning and social relationships, and, finally, reduced quality of life. [16] Improving the quality of life of these patients is considered an important motivation, since developments in new treatments and clinical planning have nowadays increased the life span of patients. [14][15][16] Exercise, especially aerobic activity, improves quality of life, family self-esteem, social communication, and the sense of well-being. [17] Group activities such as exercise can enhance the ability of patients to perform daily activities without dependency on others. [18] Exercise also has a positive impact on quality of life dimensions, including role-playing, cognitive and physical preparedness, reduced physical pain, vitality and well-being, emotional performance, mental health, cognitive functions, and flexibility in chronic patients. [19,20] The results of a study conducted by Kargarfard et al. revealed that the quality of life dimensions were significantly increased in the experimental group after 8 weeks of exercise in water, compared to those of the control group. [21] In addition, the results of a study by Ortega et al.
showed that increased IL-6 was associated with a decreased spontaneous release of TNFα after a 4-month exercise program. At the completion of the program (8 months), monocytes in the FM (fibromyalgia) patients showed decreased spontaneous production of pro-/anti-inflammatory cytokines, with the spontaneous release of IL-1β and IL-6 approaching that of healthy women (HW); however, the production of TNFα was low and that of IL-10 was high. The production of TNFα, IL-1β, IL-6, and IL-10 induced by lipopolysaccharide was reduced at the completion of the exercise program; however, IL-10 was still higher than in HW. The anti-inflammatory effect of the exercise program was also confirmed by a decreased concentration of circulating CRP. The exercise enhanced the health-related QOL of patients with FM. [22] The results of another study indicated a positive effect of exercise in water on the quality of life of patients. [19] Kargarfard et al. showed that exercise in water had a significant effect on the domains of autonomy and communication with others, but no significant effect on environmental mastery, individual growth, and self-acceptance. [23] However, the overall assessment of psychological well-being revealed a significant difference between the posttest scores of the psychological well-being of the subjects. [23] The results of a study conducted by Arian et al. indicated that a planned walking program had a positive impact on the quality of life of patients suffering from thalassemia major. [24] Adolescents with thalassemia suffer from more depressive symptoms and a lower quality of life compared to patients with short-term injuries. [13] Thus, improving the psychological state of adolescents can reduce depression and improve the prognosis of these patients. Proper nursing care can also enhance quality of life. [3,14,24] No proper exercise program has yet been designed for thalassemia patients. Besides, only a limited number of studies have been conducted on the effect of exercise on quality of life in patients with thalassemia in Iran, especially in Chaharmahal and Bakhtiari province in southern Iran, where thalassemia has a high prevalence, despite the recommendations of physicians and physiotherapists on the positive effect of exercise, especially exercise in water, on the treatment and prevention of diseases. Moreover, no research has yet been carried out on the effect of exercise in water on these patients. Hence, the aim of our study was to investigate the impact of aquatic exercise training on quality of life and blood indices in patients with beta-thalassemia major in southern Iran in 2016.

Methods

This research is a clinical trial study, registered under trial code IRCT2017021013768N12 and ethics code IR.SKUMS.REC.1395.119, and was conducted in Shahrekord in the summer of 2016. The research population included the beta-thalassemia major patients admitted to Hajar Hospital in Shahrekord for treatment and transfusion. A total of 40 patients with beta-thalassemia major were enrolled in this research.
Inclusion criteria

The inclusion criteria of the study were: age over 12 years; major thalassemia; examination by the treating physician; confirmed willingness to participate in the study; no smoking addiction; ability to understand the education provided; desire and motivation to participate in the exercise program and the research; ability to exercise; absence of physical illness or history of any particular illness that could be compromised by performing health exercise (such as cardiovascular, respiratory, mental, skin, or osteoarticular conditions); no regular exercise participation outside the program; and ability to attend training sessions regularly.

Exclusion criteria

Deteriorating health status and unwillingness to cooperate constituted the exclusion criteria of the study. After obtaining written consent from the patients, those who met the inclusion criteria of the research were selected using a convenience sampling method. They were homogeneous in terms of marital status, level of education, age, and job. The patients were then randomly divided into two groups, experimental and control (each containing 20 patients). A Sysmex device was used to measure the level of hemoglobin and hematocrit; the hematocrit level was also calculated by the computational method. A Cobas e device (made in Japan) was used to measure ferritin, and an electrochemiluminescence method was used for quantitative measurement of ferritin concentration. A BT3500 biochemistry device (made in Italy) was used to measure iron and TIBC; the photometric method was used for quantitative measurement of iron and TIBC concentration, and a colorimetric kit was also used to measure iron. The questionnaire used in this research consisted of two sections. The first section included general information such as age, gender, marital status, education level, job, hemoglobin, hematocrit, iron, serum ferritin, and TIBC. The second section included the SF-36 questionnaire, covering physical function, physical pain, functional limitation caused by physical problems, general health and mental health, vitality, limited function due to emotional issues, and social functioning. Each scale is scored from 0 to 100. [25] Montazeri et al. and Eshaghi et al. measured the validity and reliability of this questionnaire; the reliability coefficient of the questionnaire was found to be 0.7 using Cronbach's alpha method. [26,27] The research patients filled out the questionnaire before and after the study under the supervision of the researcher. After coordinating with the subjects in one 60-min session, the researcher and the sports expert demonstrated the exercises to the experimental group. The experimental group then performed the exercises for 8 weeks, and the patients continued their routine treatment during the intervention. The control group did not participate in any exercise and received their routine treatment, while the patients of the experimental group performed the exercises in water for 2 months, in three sessions of 60 min each per week, at a water temperature of 28 to 30°C, in the semi-deep part of a small day-care facility pool, led by a fitness instructor and supervised by the researchers. Moreover, the participants received medical permission from their physician to exercise.

Aquatic exercise training

The method of exercising in water and the water properties were explained to the subjects on the first day of the program, which lasted 20 min.
After learning the skill of controlling the body in water and walking in water, the subjects started the exercise. The remaining sessions lasted one hour each, in three sections. Warming up: this stage lasted 10 to 15 min, during which the subjects prepared themselves for the main exercise program by walking in the pool. The main stage involved performing the aerobic and resistance exercises for 35 to 40 min; the exercises were repeated three times every 10-15 min. Cooling down: at the end of each exercise session, the subjects performed activities to return to the initial state, including walking, simple low-intensity movements, and stretching movements; this stage lasted 5-10 min. [28] Exercises continued until the patients tired, and if a patient tired sooner than the specified time during a session, they did not continue to exercise that day. The intensity of the water exercise was commensurate with the patients' strength. Furthermore, the researchers controlled the intensity of training by measuring symptoms of weakness, body temperature, respiratory rate, and blood pressure daily before and after the exercise. The quality of life questionnaire was completed again in both groups 24 h after the end of the eighth week [29,30] and 2 months after the end of the intervention. SPSS 16 software was used to analyze the results. Descriptive statistics (including mean, standard deviation, frequency, and percentage) and statistical tests (including the independent t-test, paired t-test, Fisher's exact test, and repeated measures ANOVA) were used. A P value below 0.05 was considered significant.

Results

The study results revealed that the majority of the sample in both groups were female, single, with a high school diploma, and unemployed. Both groups were similar in demographic characteristics (P > 0.05) [Table 1]. The paired t-test for blood indices showed that the mean levels of hemoglobin, hematocrit, iron, and ferritin in the control group before and after the study were not significantly different [Table 2]. However, the paired t-test revealed that the mean level of hemoglobin in the experimental group increased significantly, from 8.57 ± 1.24 to 9.46 ± 1.24 (P < 0.05). For the other blood parameters, the increase in hematocrit and the decreases in iron and ferritin were not significant before versus after the intervention [Table 3]. The results showed that the mean scores of iron and ferritin were reduced after the intervention in the experimental group, while the mean score of iron decreased in the control group; however, these differences were not statistically significant in either group (P > 0.05) [Table 4]. The mean scores of quality of life and its eight scales, and the statistical results of the repeated measures ANOVA between groups at each time point, are given in Figure 1 and Table 5. The increases in the mean quality of life score and the eight scales at each time point were not significant between groups (P > 0.05) [Figure 1 and Table 5].

Discussion

The results of the study revealed that exercise in water caused an increase in hemoglobin and hematocrit in the experimental group, and reduced iron and ferritin compared to before the intervention. The research conducted by Hiedari showed a slight increase in hemoglobin and hematocrit of female athletes and non-athletes after 5 weeks of aerobic exercise at 60-80% HRmax.
[31] The research conducted by Engy showed that exercise improves hemoglobin and reduces fatigue. The study conducted by Durbin et al. showed that 7 weeks of aerobic exercise increased HCT, Hb, RBC, and VO2peak levels. In addition, Kratz et al. showed that exercise increased the levels of RBCs, Hb, and HCT. [32] The results of another study showed that body mass index, waist circumference, mean arterial pressure, and diastolic blood pressure significantly decreased after aerobic and resistance exercise, but HDL, fasting plasma glucose, cholesterol, and systolic blood pressure did not change significantly in the exercise groups. [33,34] In addition, the results of a study performed by Salianeh et al. showed that RBC count increased in both groups, with larger changes in the periodic exercise group. [35] Hefernan did not show a significant change in the levels of hemoglobin and hematocrit in inactive men. [36] The results of a study performed by Ramezanpour et al. showed that exercise increased serum iron, ferritin, and transferrin. [37] In contrast to the results of this research, the study conducted by Hoseinpour Motlagh et al. showed that 8 weeks of resistance exercise increased muscle strength and reduced the number of red blood cells and hematocrit, but had no effect on the concentration of fibrinogen, the number of platelets, or the number of white blood cells; resistance exercises may need to be performed for longer periods of time to affect the concentration of fibrinogen and other blood cells. [38] Eastwood et al. showed that exercise did not affect hemoglobin. [39] In addition, the results of a study on the dimensions of physical function, role-playing with regard to physical state, and role-playing with regard to psychological state showed that the mean scores of physical performance were significantly different between the two groups at the studied times (P < 0.05). The current research results revealed that exercise in water increased the quality of life in the experimental group, such that the mean quality of life score showed a significant improvement over the studied time period. These results are in line with other studies. [40][41][42] The results of other research showed that exercise improved quality of life. [16,24,40] Moreover, the results of the study revealed that the mean total quality of life score in the experimental group increased after the intervention; thus, the mean total quality of life score after the intervention was significantly different from that before the intervention. The research carried out by Arian et al. revealed that a planned walking program has an impact on the quality of life of patients suffering from thalassemia major, and a planned walking program is recommended for thalassemia patients to improve their quality of life. [24] An exercise program can improve quality of life by increasing exercise capacity. [24,40] Evidence has shown that exercise and fitness improve the human body. In children and adolescents, the effects of regular exercise and physical activity can lay a positive foundation for a healthy future life. Aerobic training in particular has been shown to improve cardiovascular fitness. Other positive effects are a decrease in fat mass and an increase in insulin sensitivity.
This physiological condition is essential in preventing various diseases, such as diabetes, metabolic syndrome and, more generally, major cardiovascular diseases. [43] Sports rehabilitation benefits patients' mental health, increasing self-confidence, well-being, and feelings of intimacy and happiness, and reducing depression and anxiety; in general, it improves the quality of life of patients. [44] Exercise can reduce risk factors in chronic diseases and improve health; together with behavioral intervention, it can play a key role in improving the mental state and quality of life of patients. [45] The research results suggest that thalassemia affects quality of life; hence, the majority of thalassemia patients have a lower quality of life. It also negatively influences the physical, psychological, social, and economic state, and the self-image of the patients. It seems that a planned and lively program in the aquatic environment might improve the quality of life of patients during exercise in water. Moreover, greater communication with other people improves patients' social functioning. Learning to exercise in water and the ability to float in the deep parts of the pool enhance the self-esteem of patients. Increasing endurance, improving performance and the ability to perform more activities, and reducing fatigue may be effective in improving the quality of life of the subjects. Improving patients' respiratory status can also be effective in improving their quality of life. Performing group-based exercises in water creates a sense of sympathy around the symptoms of the disease and builds self-confidence among the patients. [46] The lack of an observed significant increase in the mean quality of life score between groups in this research might be attributed to the small sample size, the social and economic status of the subjects, individual differences in spiritual beliefs, and the low number of exercise sessions. In addition, no significant difference was found between the mean scores of fatigue, vitality and feeling of recovery, social functioning, physical pain, and general health at the studied time points. In contrast to the results of our research, another study showed that exercise reduced fatigue and improved the quality of life dimensions, such that the patients who exercised showed lower levels of pain and fatigue, better job performance, and were generally happier people. [45]

Conclusions

A regular aquatic exercise program helps patients maintain or increase their level of performance. The results of the present research suggest that providing an exercise program, especially aerobic exercise, to patients with thalassemia major, alongside the main treatment of the disease, blood transfusion, psychiatric counselling, and life skills training, can improve social relations, prevent social isolation and depression, and improve quality of life and blood indices.

Study limitations

Individual differences in diet and blood transfusion timing can affect the actual values of blood factors, especially ferritin, haemoglobin, and iron. Also, differences in spiritual beliefs, cultural values, and attitudes toward illness can affect patients' performance.

Acknowledgments

Special thanks go to all participants as well as Shahrekord University of Medical Sciences.

Declaration of patient consent

The authors certify that they have obtained all appropriate patient consent forms.
In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published, and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
Performance Analysis and Optimal Allocation of Layered Defense M/M/N Queueing Systems

One important mission of strategic defense is to develop an integrated layered Ballistic Missile Defense System (BMDS). Motivated by queueing theory, we present a framework for the representation, modeling, performance simulation, and optimal channel allocation of layered BMDS M/M/N queueing systems. Firstly, in order to simulate the defense process and to study the Defense Effectiveness (DE), we model and simulate the M/M/N queueing system of a layered BMDS. Specifically, we propose the M/M/N/N and M/M/N/C queueing models for short defense depth and long defense depth, respectively; single target channels and multiple target channels are distinguished in each model. Secondly, we consider the problem of assigning limited target channels to incoming targets; we illustrate how to allocate channels to achieve the best DE, and we also propose a novel and robust search algorithm for obtaining the minimum channel requirements across a set of neighborhoods. Simultaneously, we present examples of optimal allocation problems under different constraints. Thirdly, several simulation examples verify the effectiveness of the proposed queueing models. This work may help in understanding the rules of the queueing process and in providing optimal configuration suggestions for defense decision-making.

Introduction

In recent years, ballistic missile (BM) technology has spread to more and more countries. Nations all over the world are developing missiles capable of reaching their enemies. One important mission of strategic defense is to develop an integrated layered BMDS to defend the homeland, deployed forces, allies, and friends from ballistic missile attacks [1]. The BMDS is based on a multilayer defense concept and therefore contains more than one defense weapon; it includes different types of defense weapons located on land or on ships used to destroy ballistic missiles [2]. A layered BMDS has two advantages: (1) Interception can mainly be divided into three phases: the boost phase, midcourse phase, and reentry phase. Since the reentry phase is very short and is the last chance for a shot, a BMDS should not rely on a single defense weapon but on defense weapons placed at different locations forming a layered BMDS; the layered BMDS allows for more shot opportunities, which will certainly increase the probability of a successful interception [3]. (2) For a given affordable BM penetration probability (or expected kill probability), cooperation between different missile defense weapons may reduce the expected resource consumption and provide an efficient way of using interceptors. The common methods used in research on the process simulation and performance evaluation of missile defense are the mathematical programming method [4,5], the probability calculation method [6], the system simulation method [7], the Markov method [8,9], and so forth.
Queueing theory is a mathematical theory of stochastic service systems, first proposed by Erlang [10]. Queueing systems have a wide range of applications, such as resource allocation [11], system optimization [12], and communication planning [13]. Similarly, in order to make full use of defense capabilities, queueing theory has many applications in defense weapons operations research; it can solve problems of weapons configuration and efficiency analysis [14][15][16]. Two questions need our attention: (1) There are many factors that affect DE, such as the number of layers, the number of defense weapons, and the Single Shot Kill Probability (SSKP); these are also factors that affect the requirement for defense weapons. How are defense weapons, number of layers, BMs, SSKP, and DE interrelated, and how can we understand this relationship to achieve the best allocation plan? (2) If we have deployed different types of defense weapons, how do we deal with them?

Using an M/M/N queueing system to simulate the missile defense process is feasible, for the following reasons: (1) The Poisson process has the simplest mathematical expressions; although BM arrivals are not fully consistent with the Poisson process, it represents the most difficult scenario (worst-case scenario) for the BMDS to deal with. As long as the BMDS can deal with Poisson arrivals, it has a certain adaptability to other arrival distributions. (2) BMs usually have fixed and highly predictable trajectories; although some of them may have limited maneuvering potential, we think this has no influence on our research. (3) The incoming directions, firing tactics, technical characteristics, and time intervals of BM arrivals have some Poisson features; the BMs can be viewed as customers waiting to be served by servers. (4) The target capacity (number of target channels, i.e., servers) and the shooting times for each target (service times) are limited. When arriving BMs find that all channels are occupied, or there is too little time for a shot, they penetrate the BMDS directly. In summary, the M/M/N queueing system can be used to analyze the DE of a BMDS, summarize the rules of defense, and provide system configuration suggestions for defense decision-making. The remainder of the paper is structured as follows. Section 2 proposes the framework for the M/M/N queueing model. Section 3 discusses M/M/N queueing models. Section 4 is dedicated to the optimal allocation of target channels. Section 5 provides numerical examples. Section 6 includes concluding remarks and future work.

M/M/N Queueing Framework

We consider an M/M/N queueing system with BM arrival rate λ and shooting rate μ of the defense weapons. M stands for "memoryless" or "Markovian" and means that the process represented by M comes from an exponential distribution [17].

(1) Suppose that BMs arrive randomly and independently of each other at a defense weapon, and that the average rate at which they arrive is given by the parameter λ [18]; that is,

P_k(t) = \frac{(\lambda t)^k}{k!} e^{-\lambda t}, \quad k = 0, 1, 2, \ldots

This is known as a Poisson arrival process; P_k(t) is the probability that k BMs arrive within time t. Suppose that the time intervals between arrivals are randomly drawn from the exponential distribution with parameter λ; their probability density function and distribution function are

a(t) = \lambda e^{-\lambda t}, \qquad A(t) = 1 - e^{-\lambda t}, \quad t \ge 0.

The exponential distribution is memoryless, which indicates that the BM arrivals are random.
(2) Suppose that the BMs are shot in the order of their arrivals; the shooting time for a BM is also exponentially distributed, at rate μ; its probability density function and distribution function are

b(t) = \mu e^{-\mu t}, \qquad B(t) = 1 - e^{-\mu t}, \quad t \ge 0,

where μ = 1/τ_mean and τ_mean is the mean shooting time. The shooting time depends on the reaction time of the defense weapon and the time for the interceptor to fly from the launch point to the calculated encounter point, which is related to the technical capabilities of the defense weapons.

Introducing ρ = λ/μ, the condition ρ < 1 means that the queue is stable if the mean shooting time is less than the mean inter-arrival time; ρ can be understood as the firing density (shooting intensity) [19].

(3) Suppose that the waiting times of BMs are also exponentially distributed, at rate ν; their probability density function and distribution function are

w(t) = \nu e^{-\nu t}, \qquad W(t) = 1 - e^{-\nu t}, \quad t \ge 0,

where ν = 1/t_mean and t_mean is the mean waiting time.

(4) Additionally, if there is an idle target channel when a BM arrives at the system, the defense weapon shoots at it immediately. In this paper, we divide queueing systems into two types: (1) the loss system (when arriving BMs find that all target channels are occupied, they penetrate the BMDS directly, i.e., leave the system without service) and (2) the mixed system (when arriving BMs find that all target channels are occupied, they are not lost immediately but wait for a limited time, depending on the time the BM spends in the killing zone of the BMDS, until a target channel becomes available). We use the term "defense depth" to distinguish between the loss system and the mixed system. "Short defense depth" denotes the case when the waiting times of BMs are shorter than the shooting times of the defense weapons (loss system), and "long defense depth" denotes the case when the waiting times of BMs are longer than the shooting times (mixed system), as shown in Figure 1.

M/M/N Queueing Models

Identical Weapons, Short Defense Depth, and M/M/N/N System. Possible states of the system are as follows:

S_0: all target channels are idle; there is no BM in the system.
S_1: 1 target channel is busy; there is 1 BM in the system.
…
S_k: k target channels are busy; there are k BMs in the system.
…
S_N: all N target channels are busy; there are N BMs in the system.

Figure 2 is the state transition diagram of the M/M/N/N system. Since p_r is the probability that a BM is found by the radar, a = p_r λ/μ is the mean number of BMs found to arrive during the mean shooting time. From the balance equations between adjacent states in Figure 2, the steady-state probabilities are

p_k = \frac{a^k}{k!}\, p_0, \quad k = 0, 1, \ldots, N.

Because p_0 + p_1 + p_2 + ⋯ + p_N = 1, the expression for p_0 can be written in the form

p_0 = \left[ \sum_{k=0}^{N} \frac{a^k}{k!} \right]^{-1}.

In particular, the probability that an arriving detected BM finds all channels busy, and is therefore lost, is p_N = (a^N/N!)\, p_0 (the Erlang loss formula). The optimal operation of the queueing system can be analyzed through several performance parameters, some of which follow.
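Before specializing to a single channel below, here is a minimal Python sketch (ours, not the authors' code) of how the loss model's performance parameters can be evaluated for general N. The interpretation that the absolute shot probability also discounts by the radar detection probability p_r is our reading of the definitions above, and the example parameter values are taken from Scenario 1 in Section 4.

```python
import math

def mmnn_performance(lam, mu, p_radar, n_channels, sskp):
    """Erlang-B (M/M/N/N) loss model for a single defense layer.

    lam        : BM arrival rate (BMs/min)
    mu         : shooting rate, 1 / mean shooting time (shots/min)
    p_radar    : probability that a BM is detected by the radar
    n_channels : number of target channels N
    sskp       : single shot kill probability
    """
    a = p_radar * lam / mu                       # offered load of detected BMs
    weights = [a**k / math.factorial(k) for k in range(n_channels + 1)]
    p0 = 1.0 / sum(weights)                      # probability all channels idle
    p_block = weights[-1] * p0                   # Erlang-B loss probability p_N
    p_shoot = p_radar * (1.0 - p_block)          # absolute prob. a BM is shot at
    de = p_shoot * sskp                          # defense effectiveness E
    return p_block, p_shoot, de

# Scenario 1 of Section 4: lambda = 5 BMs/min, mean shooting time 0.75 min,
# p_radar = 0.8, SSKP = 0.7, 10 channels in a single layer:
p_block, p_shoot, de = mmnn_performance(lam=5.0, mu=1/0.75, p_radar=0.8,
                                        n_channels=10, sskp=0.7)
print(f"blocking = {p_block:.4f}, P(shot) = {p_shoot:.4f}, DE = {de:.4f}")
```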
Single Target Channel and M/M/1/1 System. The M/M/1/1 queueing system can be viewed as a special case of Section 3.1.1 for N = 1; the expression for p_0 can then be rewritten in the form

p_0 = \frac{1}{1 + a}, \qquad p_1 = \frac{a}{1 + a}.

Performance parameters are as follows:

(1) BM loss probability: P_loss = p_1 = a/(1 + a).
(2) Relative probability that a BM will be shot at: P_rel = 1 − P_loss.
(3) Absolute probability that a BM will be shot at: P_shoot = p_r (1 − P_loss).

The DE is the product of the probability that a BM will be shot at and the SSKP, P_kill; that is,

E = P_{shoot} \cdot P_{kill}.

Identical Weapons, Long Defense Depth, and M/M/N/C System. From Section 1, we know that a BMDS with long defense depth can be regarded as a stochastic service system with limited waiting time, that is, a mixed queueing system. For each incoming BM, the defense weapons use idle target channels to shoot. When arriving BMs find that all target channels are occupied, they are not lost immediately but wait for a limited time until a target channel becomes available. Possible states of the system are as follows:

S_0: all target channels are idle; there is no BM in the system.
S_1: 1 target channel is busy; there is 1 BM in the system.
…
S_N: all N target channels are busy; there are N BMs in the system.
S_{N+j}: all N target channels are busy; j BMs are waiting to be shot.

The steady-state probabilities must satisfy the normalization constraint

\sum_{k=0}^{\infty} p_k = 1.

Let a = p_r λ/μ be the mean number of detected BMs arriving during the mean shooting time (as in Section 3.1.1), and let β = ν/μ be the mean number of BMs penetrating (leaving without being shot) during the mean shooting time. The balance equations can then be solved in two regimes. When k ≤ N, we have

p_k = \frac{a^k}{k!}\, p_0,

and when k > N, we have

p_k = \frac{a^N}{N!} \prod_{j=1}^{k-N} \frac{a}{N + j\beta}\; p_0.

With direct substitution of both expressions into the normalization constraint, it follows that

p_0 = \left[ \sum_{k=0}^{N} \frac{a^k}{k!} + \frac{a^N}{N!} \sum_{m=1}^{\infty} \prod_{j=1}^{m} \frac{a}{N + j\beta} \right]^{-1},

which, substituted back, gives the steady-state probabilities. Performance parameters are as follows:

(1) BM loss probability (BMs penetrating while waiting to be shot):

P_{loss} = \frac{\beta}{a} \sum_{k=N+1}^{\infty} (k - N)\, p_k.

(2) Relative probability that a BM will be shot at: P_rel = 1 − P_loss.
(3) Absolute probability that a BM will be shot at: P_shoot = p_r (1 − P_loss).
(4) Mean number of occupied target channels:

\bar{N} = \sum_{k=0}^{N} k\, p_k + N \sum_{k=N+1}^{\infty} p_k,

and the mean occupancy rate of target channels is η = N̄/N.

The DE is the product of the probability that a BM will be shot at and the SSKP; that is, E = P_shoot · P_kill. Similarly, the M/M/1/C queueing system can be viewed as a special case for N = 1.

Different Defense Weapons. The defense weapons in Sections 3.1 and 3.2 are identical, but a BMDS may deploy different types of defense weapons. For different types of defense weapons, the waiting time of BMs, the detection probability of the radars, and the SSKP may differ. In order to model the queueing system, we choose one type of defense weapon as the reference and then substitute the reference defense weapons (type j) for defense weapons of type i. This "replacement process" is called the equivalent replacement method [20]; its basic equations relate the two types, with superscripts (·)^(i) and (·)^(j) used to distinguish defense weapons of type i from defense weapons of type j. From these equations, substituting type-j weapons for type-i weapons yields the intensity of BMs killed by defense weapons of type i and the intensity of BM arrivals for defense weapons of type i, respectively, where J denotes the total number of types.

Optimal Allocation of Target Channels

Let M be the number of layers, and let the number of target channels deployed along the mth layer be denoted by n_m, where N is the total number of target channels, N = Σ_{m=1}^{M} n_m. The probabilities that BMs will be shot at by the M-layer defense with short defense depth are built up layer by layer, each layer being a loss stage whose arrival stream consists of the BMs not killed by the preceding layers:

1st layer: the arrival intensity is λ_1 = λ, and the shot probability is P_shoot^{(1)}, evaluated with n_1 channels at arrival rate λ_1.
2nd layer: the arrival intensity is λ_2 = λ_1 (1 − P_shoot^{(1)} P_kill), with shot probability P_shoot^{(2)} evaluated with n_2 channels at arrival rate λ_2.
…
Mth layer: the arrival intensity is λ_M = λ_{M−1} (1 − P_shoot^{(M−1)} P_kill).

The DE of the whole M-layer BMDS is

E(M, n^{(M)}) = 1 - \prod_{m=1}^{M} \left(1 - P_{shoot}^{(m)} P_{kill}\right).

Then, we define the optimization problem as finding the number of layers and the target channels deployed along each layer so as to maximize the DE, subject to a given set of constraints:

\max_{M,\; n^{(M)}} E(M, n^{(M)}) \quad \text{s.t.} \quad \sum_{m=1}^{M} n_m = N, \quad n_m \ge 1 \ \text{integer}. \tag{41}

For the nonlinear optimization problem (41), when the problem size is small, we can use algebra, dynamic programming, or enumeration to solve it. When the size of the problem is very large, an approximate solution can be obtained using advanced algorithms, for example, genetic, neural network, or heuristic algorithms. In order to obtain some potentially useful allocation rules, we analyze a scenario.

Scenario 1. Suppose that the number of target channels is 10, the SSKP is 0.7, the probability that BMs will be detected by radars is 0.8, λ = 5 BMs/min, and τ_mean = 0.75 min. Tables 1, 2, and 3 give the DE of two-layer, three-layer, and four-layer defense, respectively.

Theorem 1. Let E(M, n^(M)) be the DE of the M-layer BMDS, where n^(M) = (n_1, n_2, …, n_M) is the allocation plan, n_m is the number of target channels deployed along the mth layer, and N = Σ_{m=1}^{M} n_m, n_m ≥ 1. When N is constant, max E(M, n^(M)) is stochastically increasing as M increases; that is,

\max_{n^{(M+1)}} E(M+1, n^{(M+1)}) \ge \max_{n^{(M)}} E(M, n^{(M)}). \tag{42}

The proof of Theorem 1 is similar to Lemma 1 in [8]. We now continue to compute the DE for M = 5, N = 10 and M = 6, N = 10; Tables 4 and 5 give the DE of five-layer and six-layer defense, respectively. Another useful rule is that the number of target channels deployed along the mth layer should be no less than that of the (m+1)th layer; this rule is summarized in Theorem 2.

Minimum Requirements of Target Channels. The requirement of target channels necessary to achieve a demanded DE can be viewed as a dual problem of (41). Assuming that the DE must be held at greater than E*, we define the optimization problem so as to minimize the requirement of target channels, subject to a given set of constraints:

\min_{M,\; n^{(M)}} \sum_{m=1}^{M} n_m \quad \text{s.t.} \quad E(M, n^{(M)}) \ge E^*, \quad n_m \ge 1 \ \text{integer}. \tag{49}

For the nonlinear optimization problem (49), when the problem size is small, we can use algebra, dynamic programming, or enumeration to solve it. When the size of the problem is very large, an approximate solution can be obtained using advanced algorithms, for example, genetic or heuristic algorithms. We now give the definition of a neighborhood [21].

Definition 3. Let n^(M) = (n_1, n_2, …, n_M) ∈ Ω be an allocation plan, where Ω is the feasible region of allocation plans. Suppose that n*^(M) = (n_1, n_2, …, n_{l−1}, n_l − 1, n_{l+1}, …, n_M) ∈ Ω, where the number of target channels deployed along the lth layer is reduced by one; the set of all such plans is called the neighborhood of n^(M), denoted D(n^(M)).

Scenario 2. Suppose the SSKP is 0.7, the probability that BMs will be detected by radars is 0.8, λ = 5 BMs/min, τ_mean = 0.75 min, and E* = 65%; the question then becomes: "What is the least cost in target channels needed to achieve the demanded DE?" Figure 4 is the schematic of the search in the neighborhoods of n^(3) = (5, 3, 2) and n^(4) = (4, 3, 2, 1). We can see that the least cost is 9 channels for the three-layer defense and 8 channels for the four-layer defense, with allocation plans n^(3) = (4, 3, 2) and n^(4) = (3, 3, 1, 1), respectively. We therefore give a simple algorithm for finding the least cost in target channels to achieve a demanded DE; the specific search method is as follows. A feasible initial allocation plan is very important in this algorithm; Theorem 1 provides the basic rule for finding an initial plan, which greatly simplifies the searching process.
Output the allocation plan n^(M); ∑_{m=1}^{M} n_m is the least cost to achieve the demanded DE. Different Defense Weapons and Identical SSKPs. In this section, we consider different types of defense weapons. Let M be the number of layers; the number of target channels deployed along the m-th layer will be denoted by n_m, assuming that the total number of types is K and that defense weapons are identical within the same layer. As in Section 3.3, the subscript (·)^(k) indicates defense weapons of type k; the probabilities that BMs will be shot by the M-layer defense with short defense depth are then given layer by layer, beginning with the 1st layer: P^(1)_shoot = 1 − (⋯) (52), and similarly for the 2nd through M-th layers. The DE of the whole M-layer BMDS follows. For given types and numbers of target channels, we define the optimization problem as finding which type of defense weapon should be deployed on each layer so as to maximize the DE, subject to a given set of constraints. In order to get some potential and useful allocation rules, we analyze a scenario. Scenario 3. Suppose that the number of layers is 3, the number of defense weapon types is 2 (types I and II), SSKP^(I) = SSKP^(II) = 0.7, the probability that BMs will be detected by radars is 0.8, λ = 5 BMs/min, mean(I) = 0.75 min, and mean(II) = 1 min. Table 6 gives the DE of the three-layer defense. It can be seen from Table 6 that plan n^(3) = (I, I, II) has the biggest DE. We also found that n^(4) = (I, I, I, II) is the best, and then we have Theorem 4. Proof. Suppose that we have an allocation plan n^{I_Θ}(M) = (I_Θ, II_Θ), where mean(I_Θ) = mean(II) and mean(II_Θ) = mean(I). The two terms that appear in the proof of Theorem 4 could be a factor in channel allocation; we extended Theorem 4 to obtain Lemma 5. Scenario 4. Suppose that the number of layers is 3, the number of defense weapon types is 2 (types I and II), the probability that BMs will be detected by radars is 0.8, λ = 5 BMs/min, mean(I) = mean(II) = 0.75 min, SSKP^(I) = 0.8, and SSKP^(II) = 0.6. Table 7 gives the DE of the three-layer defense. It can be seen from Table 7 that plan n^(3) = (I, I, II) has the biggest DE. We also found that n^(4) = (I, I, I, II) is the best, and then we have Theorem 6. From (69), the result of Theorem 4 obviously follows. Numerical Examples. In this section, we use numerical examples to generate some insights into the performance of the proposed queueing models in Section 3.
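Before walking through the examples, it may help to see how the basic loss-system quantities can be computed. The Python sketch below is illustrative only: it assumes the standard recursive Erlang-B formula for the M/M/N/N blocking probability and composes DE as detection probability times service probability times SSKP, mirroring the definitions above; the function and parameter names are ours, not the paper's.

```python
def erlang_b_loss(rho: float, n: int) -> float:
    """Blocking probability of an M/M/N/N loss system via the
    standard Erlang-B recursion: rho = lambda/mu is the offered
    load, n is the number of target channels."""
    b = 1.0
    for k in range(1, n + 1):
        b = rho * b / (k + rho * b)
    return b


def defense_effectiveness(lam, mean_shoot, n, p_detect, sskp):
    """Illustrative DE = P(detected) * P(not lost) * SSKP."""
    rho = lam * mean_shoot              # offered load lambda/mu
    return p_detect * (1.0 - erlang_b_loss(rho, n)) * sskp


# Scenario-1-style parameters: lambda = 5 BMs/min, mean shooting
# time 0.75 min, N = 10 channels, detection 0.8, SSKP 0.7.
print(defense_effectiveness(5.0, 0.75, 10, 0.8, 0.7))
```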
Firstly, using the formulas in Section 3.1.2, we draw the relationship between the mean shooting time and the intensity of BM arrivals (see Figure 5). Then, using the formulas in Section 3.1.1, we calculate the loss probability of the M/M/N/N system. We draw the relationship between the BM loss probability and the offered density of shootings with N = 1, 2, 3, 4, 5, 6, 8, 12, 16, 20, 30, 40, and 50 (see Figure 6). From Figures 5 and 6, we can see that the probability of BM loss increases with increasing mean shooting time and intensity of BM arrivals. We set two scenarios: scenario 1 (the total number of BMs is 30, the number of target channels is 3, the SSKP is 0.7, the probability that BMs will be detected by radars is 0.8, λ = 3 BMs/min, and the mean shooting time is 1 min (μ = 1)) and scenario 2 (the total number of BMs is 50, the number of target channels is 8, the SSKP is 0.8, the probability that BMs will be detected by radars is 0.9, λ = 5 BMs/min, and the mean shooting time is 1 min (μ = 1)). We first simulate the M/M/N/N queueing system and use Matlab to plot the performance of the two scenarios (see Figure 7). We then simulate the M/M/N/C queueing system and use Matlab to plot the performance of the two scenarios (see Figure 8). Table 8 shows the resulting operating parameters of the two queueing systems. In order to explore the changes in the relationship between system performance and different factors, we adjust the parameters in scenario 1; that is, we (a) increase the number of arriving BMs, (b) increase the intensity of arriving BMs, (c) increase the mean shooting time for each BM, and (d) reduce the number of target channels. Figure 9 shows the queue length as a function of the number of BMs, and Table 9 shows the resulting operating parameters of the adjusted queueing system. Through adjustment of the system configuration, we can observe and summarize the running condition of the queueing system. This can be useful for decision-making in BMDS operation control and for adjusting the system configuration, and the approach can be extended to handle a broader array of queueing scenarios. Several areas for potentially valuable future research have emerged from this work; we suggest the following areas of further research [22][23][24][25][26]. (1) We used some approximations in our computations; an important open question is the accuracy of the BM arrival distribution. The Poisson arrival process is not the only fitting model for the queueing model provided in our paper; we will consider Bernoulli or Markov BM arrival processes in future research. This also includes relaxing the assumption of exponential shooting times and allowing waiting times to vary by BM and defense weapon. (2) For the convenience of
Figure 1: Long defense depth and short defense depth.
Figure 2: State transition diagram of the M/M/N/N system.
Theorem 4. Let E(M, n^(M)) be the DE of the M-layer BMDS; n^(M) = (n_1, n_2, …, n_M) is the allocation plan, n_m is the number of target channels deployed along the m-th layer, and let n_1 = n_2 = ⋯ = n_M. Allocation plan n^I(M) = (I, II) indicates that defense weapons of type I are forward-deployed, and n^II(M) = (II, I) indicates that defense weapons of type II are forward-deployed. Suppose that defense weapons are identical in the same layer, and p^(I) = p^(II) and SSKP^(I) = SSKP^(II); if mean(I) ≤ mean(II), then one has E(M, n^I(M)) ≥ E(M, n^II(M)).
Lemma 5. Let E(M, n^(M)) be the DE of the M-layer BMDS; n^(M) = (n_1, n_2, …, n_M) is the allocation plan, n_m is the number of target channels deployed along the m-th layer, and let n_1 = n_2 = ⋯ = n_M. Allocation plan n^I(M) = (I, II) indicates that defense weapons of type I are forward-deployed, and n^II(M) = (II, I) indicates that defense weapons of type II are forward-deployed. Suppose that defense weapons are identical in the same layer, and p^(I) = p^(II); if SSKP^(I) · mean(I) ≤ SSKP^(II) · mean(II), then one has E(M, n^I(M)) ≥ E(M, n^II(M)).
Figure 5: Performance parameters as functions of the mean shooting time and the intensity of BM arrivals (M/M/1/1 system).
Figure 6: The probability of BM loss as a function of the number of target channels and the density of shooting.
Figure 7: Performance of the M/M/N/N queueing system (panels: BM waiting time and weapon shooting time; queue length; total number of BMs in queue and last BM arrived, for scenarios 1 and 2).
Figure 8: Performance of the M/M/N/C queueing system (panels: shooting time for each BM; finish time of shooting, for scenarios 1 and 2).
Figure 9: The queue length as a function of the number of BMs.
Table 1: DE of two-layer defense. Table 3: DE of four-layer defense. Table 4: DE of five-layer defense. Table 6: DE of three-layer defense. Table 7: DE of three-layer defense. Table 8: Results of operating parameters of the two queueing systems. Table 9: Results of operating parameters of the adjusted queueing system.
In addition to the M/M/N queueing models, four simple rules have been developed for use on the complex channel allocation problems. The main aim of this work is to study a stochastic missile defense process close to the Poisson process and to find allocation rules.
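For small instances, the enumeration approach mentioned for problems (41) and (49) is easy to implement directly. The sketch below is an assumption-laden illustration, not the paper's exact model: it reuses the Erlang-B recursion from the earlier sketch for each layer's shot probability, treats layers as sequential filters on leakers, and does not thin the arrival intensity between layers.

```python
from itertools import combinations


def erlang_b_loss(rho, n):
    b = 1.0
    for k in range(1, n + 1):
        b = rho * b / (k + rho * b)
    return b


def compositions(total, parts):
    """All ordered ways to split `total` channels over `parts` layers."""
    for cuts in combinations(range(1, total), parts - 1):
        bounds = (0,) + cuts + (total,)
        yield tuple(bounds[i + 1] - bounds[i] for i in range(parts))


def layered_de(plan, lam, mean_shoot, p_detect, sskp):
    """DE of a layered defense: each layer engages the BMs that
    leaked through the layers in front of it (illustrative)."""
    leak, killed = 1.0, 0.0
    for n in plan:
        p_kill = p_detect * (1.0 - erlang_b_loss(lam * mean_shoot, n)) * sskp
        killed += leak * p_kill
        leak *= 1.0 - p_kill
    return killed


# Enumerate all 2- to 4-layer plans for N = 10 channels (Scenario 1).
best = max((layered_de(p, 5.0, 0.75, 0.8, 0.7), p)
           for m in (2, 3, 4) for p in compositions(10, m))
print(best)
```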
2018-12-10T11:32:31.194Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "9aee2f3e73880e47a0764a86a063eee1b2f0e42e", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/mpe/2016/5915918.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9aee2f3e73880e47a0764a86a063eee1b2f0e42e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
265657488
pes2o/s2orc
v3-fos-license
Unicentric Castleman disease presenting as a longstanding axillary and chest wall mass: A case report
Key clinical message: Unicentric Castleman disease, particularly the hypervascular variant subtype, commonly presents as a localized lymphadenopathy without systemic symptoms. Surgical excision is often curative for this subtype, leading to a good prognosis. However, some patients with autoimmune complications may require additional systemic therapy along with surgery. Accurate diagnosis through a combination of clinical, radiological, and pathological findings is crucial for optimal management.
Diagnostic workup, including ultrasonography (Figure 2) and MRI (Figure 3), revealed a well-circumscribed, heterogeneously enhancing mass measuring 10 × 6 × 3 cm. The laboratory investigations revealed that the patient's white blood cell and red blood cell counts were within normal range. Additionally, the CRP (C-reactive protein) and ESR (erythrocyte sedimentation rate) results were also normal. Notably, the IgG level was measured to be 10.6 g/L. A core needle biopsy was performed, and pathological examination revealed follicular hyperplasia, onion skin-like changes, increased vascularity in the interfollicular areas, and proliferation of fibrous tissue in the capsule (Figure 4). The histopathological features were consistent with the hyaline vascular variant of Castleman disease. 2 After appropriate preoperative preparation, the patient underwent surgical excision of the mass. Intraoperative findings revealed a well-encapsulated mass that was adherent to the chest wall muscles. The mass was completely excised with negative margins. Postoperative recovery was uneventful, and the patient was discharged on postoperative Day 5. Histopathological examination of the excised mass confirmed the diagnosis of Castleman disease. HE staining revealed follicular hyperplasia, onion skin-like changes, increased vascularity in the interfollicular areas, and proliferation of fibrous tissue in the capsule. These findings are consistent with Castleman disease. The diagnosis of unicentric Castleman disease was determined based on the patient's medical history, physical examination, medical diagnostic tests, surgery, and pathological examination.
DISCUSSION
Castleman disease is a rare lymphoproliferative disorder that can present with varied clinical features. 3
Castleman disease is first divided into unicentric and multicentric, and multicentric CD is further subdivided into POEMS-associated, HHV-8+, or HHV-8− (idiopathic multicentric Castleman disease). HHV-8− iMCD is further divided into iMCD-TAFRO and iMCD-NOS. 4 The disease has two major histological subtypes: hyaline vascular (or hypervascular variant) and plasma cell. The more common hypervascular variant subtype presents with localized lymphadenopathy, and this is the histology most commonly associated with unicentric Castleman disease, 5 while the less common plasma cell subtype is associated with systemic symptoms and multiorgan involvement. 6 However, in this subset of iMCD-TAFRO patients, there are also individuals who exhibit the hypervascular variant subtype, which is the most aggressive form of CD and has severe systemic symptoms. In this case study, a patient with unicentric Castleman disease was confirmed to have the hypervascular variant subtype based on pathological examination. The patient underwent complete surgical resection without receiving any additional systemic treatment. Following a one-year postoperative follow-up, no signs of recurrence or complications were observed. In general, for unicentric Castleman disease, surgical resection is considered curative, but some patients with unicentric CD have autoimmune complications such as immune-mediated cytopenias and bullous pemphigus, and these patients have a more aggressive course, often requiring systemic therapy in addition to surgery. Treatment options include surgery, radiation therapy, and chemotherapy, depending on the subtype, stage, and extent of the disease. 8 The localized hypervascular variant subtype generally has a good prognosis, while the systemic plasma cell subtype has a more variable clinical course. 9 Diagnosis of Castleman disease requires a combination of clinical, radiological, and pathological findings.
Figure 1: Location of the mass; it is significantly deeper than the skin yet visible to the naked eye.
Figure 2: Ultrasound examination revealed an intact tumor capsule, presenting as an irregular hypoechoic mass.
Figure 3: Magnetic resonance imaging (MRI) exhibited well-defined lesion borders with a larger cross-sectional size of approximately 99 mm × 53 mm. The T1-weighted images demonstrated isointense signal, while the T2-weighted images showed slightly hyperintense signal. The presence of flow voids consistent with blood vessels was observed within the lesion. Following contrast administration, there was significant and homogeneous enhancement.
Figure 4: HE staining revealed follicular hyperplasia, onion skin-like changes, increased vascularity in the interfollicular areas, and proliferation of fibrous tissue in the capsule. These findings are consistent with Castleman disease.
2023-12-06T05:05:30.269Z
2023-12-01T00:00:00.000
{ "year": 2023, "sha1": "fe71dd46cf1269d232c219e4c7e4e84807b05f41", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ccr3.8258", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fe71dd46cf1269d232c219e4c7e4e84807b05f41", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1933987
pes2o/s2orc
v3-fos-license
The effect of intrathecal gabapentin on mechanical and thermal hyperalgesia in neuropathic rats induced by spinal nerve ligation. Gabapentin decreases the level of glutamate and elevates that of γ-amino-butyric acid in the central nervous system. Gabapentin was shown to have antinociceptive effects in several facilitated pain models. Intrathecal gabapentin was also known to be effective in reducing mechanical allodynia in animals with neuropathic pain. In this study, we investigated whether intrathecal gabapentin produces antihyperalgesic effects on thermal and mechanical hyperalgesia in neuropathic rats and whether its effects are associated with motor impairment. To induce neuropathic pain in Sprague-Dawley rats, the left L5 and L6 spinal nerves were ligated. After a week, lumbar catheterization into the subarachnoid space was performed. Then, paw withdrawal times to thermal stimuli and vocalization thresholds to paw pressure were determined before and up to 2 hr after intrathecal injection of gabapentin. Also, motor functions including performance times on the rota-rod were determined. Intrathecal gabapentin significantly attenuated thermal and mechanical hyperalgesia in neuropathic rats, but did not block thermal and mechanical nociception in sham-operated rats. Intrathecal gabapentin at antihyperalgesic doses inhibited motor coordination performance without evident ambulatory dysfunction. This study demonstrates that intrathecal gabapentin is effective against thermal and mechanical hyperalgesia, in spite of moderate impairment of motor coordination. INTRODUCTION Peripheral nerve injury caused by trauma is associated with spontaneous pain, allodynia, and hyperalgesia. These neuropathic pain symptoms are often poorly relieved by conventional analgesics, such as opioids and non-steroidal anti-inflammatory drugs (1,2). Although the mechanisms underlying neuropathic pain have not been fully understood, it is known that excitatory amino acids, including glutamate, play a key role in the alteration of spinal sensory processing and the plasticity of dorsal horn neurons after nerve injury (3,4). In the search for alternative treatments, anticonvulsants have been found to be a pharmacological intervention for patients with neuropathic pain, because the mechanisms of convulsion may be similar to those of neuropathic pain. Gabapentin as an anticonvulsant drug has attracted recent attention because of its effectiveness against neuropathic pain in clinical trials and animal experiments. Although the mechanism of the antinociceptive action of gabapentin remains unclear, it has been demonstrated that gabapentin decreased glutamate concentration and elevated γ-amino-butyric acid (GABA) concentration in the central nervous system of the rat (5)(6)(7). Previous studies have shown that gabapentin produced an antinociceptive effect in various facilitated pain models (8)(9)(10)(11)(12)(13)(14) and intrathecal gabapentin was effective against allodynia in neuropathic pain (15,16). The antinociceptive effect of systemic administration of gabapentin was observed at doses below those producing its side effects, including behavioral or motor dysfunction (17)(18)(19). However, it has been uncertain whether intrathecal gabapentin is effective against mechanical and thermal hyperalgesia in neuropathic pain induced by nerve injury and whether its effect is accompanied by any side effect on motor function.
Therefore, we examined whether intrathecal gabapentin produces an antinociceptive effect on thermal and mechanical hyperalgesia in neuropathic rats induced by spinal nerve ligation, and whether the antihyperalgesic effect of gabapentin is affected by motor dysfunction. Animal preparation Male Sprague-Dawley rats weighing 150-200 g were housed in separate cages and allowed to acclimate for 5-7 days using a 12/12 hr day/night cycle. The surgical preparation and the experimental protocol were approved by the Institutional Animal Care and Use Committee of the Samsung Biomedical Research Institute. Surgical preparation Ligation of the left L5 and L6 spinal nerves in rats was used in this study as an experimental model of neuropathic pain. Rats were anesthetized with 1% halothane in O2 by a mask. The surgical procedure was performed according to the method described by Kim and Chung (20). A dorsal midline incision was made from L3 to S2. The left L6 transverse process was resected in part to visualize the L4 and L5 spinal nerves. The left L5 spinal nerve was isolated and ligated tightly with 6-0 black silk just distal to the dorsal root ganglion. The left L6 spinal nerve was isolated below the iliac crest and ligated tightly with 6-0 black silk. After recovery from anesthesia, rats that were unable to withdraw the left hindpaw were excluded from the study. The rats in which the thresholds to thermal and mechanical stimuli after nerve ligation were decreased by more than 20% compared with those before nerve ligation were included in the study. Sham control rats were prepared in the same way, except for nerve ligation. The animals were allowed to recover for 5-7 days before intrathecal cannulation. Intrathecal catheters (PE-10 tube) were inserted into the lumbar subarachnoid space during halothane anesthesia, as previously described by Størkson et al. (21). Proper placement of the catheter was determined by the occurrence of hindpaw paralysis after an intrathecal injection of 10 μL of 2% lidocaine. All pharmacological experiments were conducted between 2 and 3 weeks after spinal nerve ligation.
Each rat received only a single intrathecal injection of drugs. Behavioral testing Thermal response was determined by the left hindpaw withdrawal times using a plantar tester (Stoelting Co, Wood Dale, U.S.A.), following a modified method of Hargreaves et al. (22). Rats were allowed to acclimate within plastic enclosures on a clear glass plate maintained at room temperature. A radiant heat source was controlled with a timer and focused onto the plantar surface of the hindpaw encompassing the glabrous skin. Paw withdrawal stops both the heat source and the timer. A maximal cut-off of 30 sec was used to prevent tissue damage. Three trials, at least 10 min apart, were conducted and the three withdrawal times were averaged to give a mean withdrawal time. Mechanical response was measured using a Randall-Selitto algesiometer (Ugo Basile, Comerio, Italy), which generates a linearly increasing mechanical force. A mechanical stimulus was applied to the dorsal surface of the left hindpaw by a dome-shaped plastic tip. Mechanical thresholds were defined as the force in grams at which the rat vocalized. A maximal cut-off of 400 g was used to prevent tissue damage. Two trials, at least 10 min apart, were conducted and the two vocalization thresholds were averaged. Motor function of the hindpaws was evaluated by testing the animals' ability to stand and ambulate in a normal posture. We assessed the motor function by grading the ambulating behavior of the rats (16) as follows: 2 = normal; 1 = limping; 0 = paralyzed. Motor coordination was tested using an accelerating rota-rod treadmill (Stoelting Co, Wood Dale, U.S.A.). The rota-rod was set in motion at a constant speed and the rats were placed into individual sections of the rota-rod. Once the rats were in position, the timers were set to zero and the rota-rod was switched to accelerating mode. The rota-rod was operated at a rate of 4 rpm for 20 sec, then at 8 rpm for 120 sec, and at 16 rpm for 60 sec (19). The rats were trained in the test procedure for 5 days before collecting data. The performance times were recorded when the rats were unable to stay on the rota-rod and tripped onto the plate. Two trials were performed at intervals of 10 min and the performance times were averaged. Experimental protocol On the experiment day, rats were acclimated for 30 min before testing. Then, baseline thresholds were determined. Rats were assigned randomly to four groups receiving an intrathecal injection of normal saline (n=6) or one of three different doses of gabapentin (Parke-Davis, Ann Arbor, U.S.A.): 30 μg (n=6); 100 μg (n=6); 300 μg (n=6). These doses were based on previous studies and our pilot studies. Drugs were dissolved in normal saline and delivered in a volume of 10 μL, followed by a 10 μL flush of normal saline, using a gear-driven microinjection syringe. The thermal and mechanical thresholds and the motor function in neuropathic rats and sham-operated rats were determined at 30, 60, and 120 min after treatment. The motor performance on the rota-rod was measured in sham-operated rats. The response threshold data were converted to a percentage of the maximum possible effect (%MPE) according to the following formula: %MPE = [(postdrug threshold − predrug threshold)/(cut-off threshold − predrug threshold)] × 100. Statistical analysis Data are presented as mean ± SD.
Within each of the treatment groups, the effects of drugs on thermal and mechanical hyperalgesia and motor coordination were compared with pre-treatment values by repeated-measures analysis of variance, followed by Dunnett's least-significant-difference analysis for multiple comparisons. The paw thresholds in response to thermal and mechanical stimuli before and after nerve ligation were compared using the paired Student's t-test. A probability level < 0.05 was considered statistically significant. RESULTS The tight ligation of spinal nerves produced a marked reduction in the thermal and mechanical stimuli necessary to evoke paw withdrawal and vocalization. The paw withdrawal times were reduced significantly from 11.3±1.5 to 7.7±0.6 sec, and the vocalization thresholds from 213.2±32.8 to 126.5±21 g. In spinal nerve-ligated rats, intrathecal gabapentin at 100 and 300 μg significantly increased the withdrawal times of the injured paw in response to thermal stimuli (p<0.05) in a dose-dependent manner. These effects of intrathecal gabapentin 300 μg were observed at 30 min and reached a maximum at 60 min (Fig. 1). The percentages of the maximum possible effect were 48% and 74% at doses of 100 and 300 μg, respectively. In sham-operated rats, however, intrathecal injection of the three different doses of gabapentin did not increase the withdrawal times in response to thermal stimuli (Table 1). In spinal nerve-ligated rats, intrathecal gabapentin also significantly increased the vocalization thresholds in response to mechanical stimuli (p<0.05), except for gabapentin 30 μg. This effect of intrathecal gabapentin at 300 μg was observed up to 120 min, whereas that of gabapentin 100 μg was not present at 120 min (Fig. 2). The percentages of the maximum possible effect were 31% and 79% at doses of 100 and 300 μg, respectively. In sham-operated rats, however, intrathecal gabapentin did not increase the vocalization thresholds in response to mechanical stimuli, except for gabapentin 300 μg (Table 2). In spinal nerve-ligated and sham-operated rats, intrathecal gabapentin did not decrease the ambulating behavior scores. However, rats given an intrathecal injection of gabapentin 300 μg showed splayed hindpaws. Intrathecal gabapentin significantly decreased the performance times on the rota-rod (p<0.05) in sham-operated rats, except for gabapentin 30 μg (Table 3). Table 3. Changes of performance time (sec) on the rota-rod after intrathecal injection in sham-operated rats. DISCUSSION In the present study, intrathecal administration of gabapentin was effective against thermal and mechanical hyperalgesia in neuropathic pain induced by spinal nerve ligation, and its effect was not limited by motor dysfunction. However, antihyperalgesic doses of intrathecal gabapentin inhibited motor coordination performance without evident ambulatory dysfunction. The antinociceptive effect of intrathecal gabapentin observed in the current study agrees with previous observations in which gabapentin was effective in various facilitated pain models. It has been shown that gabapentin attenuated the various hypersensitive states induced by injection of formalin (8,9), streptozocin (10), or substance P (14) and reduced paw incision-induced pain (11,12) and burn-induced pain (13). In addition, intrathecal gabapentin decreased mechanical allodynia in neuropathic pain induced by nerve injury (15,16).
The efficacy of gabapentin in several pain models suggests common mechanisms associated with the generation of a facilitated state of processing. Although the mechanism of antinociception of gabapentin has not been established, several studies have suggested that the spinal cord is the primary site of drug action (8,9,11,13,15,16). In our study, we found an antihyperalgesic effect of intrathecal gabapentin at a higher dose than that producing an antiallodynic effect in neuropathic rats. This dose was also higher than that producing an antihyperalgesic effect in formalin-induced pain (8,9), postoperative pain (11), and burn-induced pain (13). It may simply reflect a difference in the stimulus strength that intrathecal gabapentin has to overcome to produce an antinociceptive effect in the various pain models. It may also reflect that intrathecal gabapentin produces different sensitivities to different kinds of abnormal pain. Previous studies have shown that administration of gabapentin in normal rats did not alter formalin-induced behaviors during the phase 1 period (8,9) or the response to physiologic pain (18). This suggested that the antinociception of gabapentin is not analgesic but antihyperalgesic. We observed that intrathecal gabapentin at 300 μg in sham-operated rats did not increase the paw withdrawal times to thermal stimuli but did increase the vocalization thresholds to mechanical stimuli. This finding is partly consistent with the recent observation that systemic gabapentin increased vocalization thresholds of the non-injured paw to mechanical stimuli in neuropathic rats (17). However, it does not support the idea that relatively high doses of intrathecal gabapentin may produce an analgesic effect in normal physiologic pain. It is known that the vocalization response to paw pressure is a supraspinally integrated test, whereas the paw withdrawal response to thermal stimuli is a spinally coordinated reflex. As gabapentin is poorly soluble in lipid (23), intrathecal gabapentin may spread cephalad easily and produce a supraspinal effect. Therefore, the vocalization response may be more susceptible than the paw withdrawal response to intrathecal gabapentin. Although the mechanism of action of gabapentin is not clear, there is evidence that N-methyl-D-aspartate (NMDA)- and GABA-mediated events are involved in the pharmacological action of gabapentin. A glycine/NMDA agonist reversed the antihyperalgesic action of gabapentin (19). Furthermore, gabapentin blocked the thermal hyperalgesia induced by intrathecal NMDA (14). Gabapentin has been shown to increase GABA synthesis (5) and enhance GABA release in brain regions (7). Although previous studies have failed to show any affinity for GABAA or GABAB sites or any known site associated with these receptors (15,23,24), gabapentin may play a physiologic role in the modulation of the glutaminergic and GABAergic functions that are involved in the central sensitization of dorsal horn neurons induced by injury. In our study, although intrathecal gabapentin at 300 μg caused the hindpaws to be splayed, it did not obtund the brisk withdrawal response of the hindpaw to thermal stimuli in sham-operated rats, nor did it inhibit ambulating ability. This finding suggests that the antihyperalgesic effect of intrathecal gabapentin in neuropathic rats is not limited by motor dysfunction.
Previous studies showed that intrathecal administration of gabapentin up to 100 μg caused no detectable motor weakness, as judged by placing-stepping reflexes and ambulating behavior, nor other visible behavioral changes such as sedation and agitation (11,14,16). Doses of intrathecal gabapentin less than 300 μg did not alter motor responses, including the paw withdrawal response to pinch (15). Furthermore, several observations pointed out a good separation between the antinociceptive effects and the side effects of gabapentin (16)(17)(18). Systemic gabapentin was effective in models of neuropathic pain after sciatic nerve constriction (16,17) or formalin-induced pain (18) at doses below those producing behavioral or locomotor dysfunction. However, we found that intrathecal gabapentin at doses inducing an antihyperalgesic effect produced motor impairment on the rota-rod. This indicates that intrathecal administration of gabapentin has a narrower margin of safety than systemic administration. As water-soluble gabapentin may move from the injected site to the brain through the cerebrospinal fluid, the central nervous system that controls locomotion may be more vulnerable to intrathecal gabapentin than to systemic gabapentin. In conclusion, the present study reveals that intrathecal injection of gabapentin is effective against thermal and mechanical hyperalgesia in neuropathic rats induced by spinal nerve ligation. This result suggests that gabapentin may exert potent effects on anomalous pain states with facilitated spinal processing caused by tissue or nerve injury. However, intrathecal gabapentin at antihyperalgesic doses also causes impairment of motor coordination, which may be considered one of its side effects. Therefore, it remains to be determined whether intrathecal injection of gabapentin may be a safe intervention for neuropathic pain.
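To make the %MPE conversion from the Methods concrete, here is a minimal Python sketch; the numerical values are illustrative, chosen so the result lands near the 74% reported for the 300 μg dose on the thermal test.

```python
def percent_mpe(postdrug: float, predrug: float, cutoff: float) -> float:
    """%MPE = (postdrug - predrug) / (cutoff - predrug) * 100."""
    return (postdrug - predrug) / (cutoff - predrug) * 100.0


# Thermal test: 30-sec cut-off; a neuropathic baseline of 7.7 sec that
# improves to 24.2 sec post-drug gives roughly 74 %MPE.
print(round(percent_mpe(24.2, 7.7, 30.0)))  # -> 74
```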
2016-05-12T22:15:10.714Z
2002-04-01T00:00:00.000
{ "year": 2002, "sha1": "c62c4f677c8b4a13d637185ab736050699320acb", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc3054857?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "c62c4f677c8b4a13d637185ab736050699320acb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
59504404
pes2o/s2orc
v3-fos-license
Skyrmion-Like Solitons, Topological Quasi-Positroniums, and Soliton-Catalytic Effects in Graphite-Potassium Intercalation Compounds and Metal Surfaces We have analyzed the narrow components in the positron annihilation angular correlation spectra of graphite-potassium intercalation compounds with a theoretical formula extended from the "topological quasi-positronium" model and discuss the relation to the catalytic activity of hydrogens. One mechanism of the soliton-catalytic effect is proposed. Introduction It is known that alkali-metal graphite intercalation compounds (AGICs) have catalytic activities for hydrogen in the intercalant layers of the compounds. The behaviors of these compounds are known to depend on the structure, the sort of metal, and the temperature. Hydrogen is physisorptively accommodated in molecular form in the interstices among the alkali-metal ions in the intercalant layers of stage-2 compounds C24M (M = K, Rb and Cs) at temperatures below 200 K [1]-[3]. Chemisorption of hydrogen takes place in both first- and second-stage compounds at higher temperatures [4] [5]. In order to clarify the mechanism of hydrogen chemisorption in the graphite compounds and the properties of hydrogen-absorbed graphite compounds, several studies have been carried out. The change in magnetic susceptibility as a function of hydrogen content has been investigated for hydrogen-chemisorbed C8K [6]. It has been suggested that dissolved hydrogen in C8K is paramagnetic by means of magnetic resonance studies [7]. Studies by means of ESR and electrical conductivity [8] have shown that in C8K, C24K, and C24Rb, the hydride ions are stabilized after the dissociation of the hydrogen molecules into atoms and the subsequent charge transfer, while in C8Rb hydrogens are absorbed in atomic form. Positron annihilation spectroscopy is a useful method in the study of the electronic structures of materials. This spectroscopy has been used in the investigation of electronic structures in graphite and AGICs. The momentum distribution of σ- and π-electrons in a graphite crystal has been studied by angular correlation of positron annihilation radiation (ACPAR) [9]-[12] and by Doppler-broadening positron-annihilation radiation (DBPAR) [13]-[15]. In ACPAR spectra of C8K and C24K, a broad contribution accounting for annihilation with the graphite σ- and π-electrons is subtracted; the features of the remaining narrow components are in good agreement with quasi-two-dimensional electronic structures, which might correspond to the interlayer state with quasi-two-dimensional free-electron character parallel to the carbon planes [16]. In addition, hydrogen physisorption and chemisorption effects in AGICs have been studied by DBPAR [17]-[20]. The DBPAR spectral line shape of C8K became sharp through hydrogen chemisorption at 300 K [17] [18], while the spectral line shape of C24Cs became broad through the physisorption of hydrogen molecules at 77 K [18] [19]. The intensity of the narrow component of the DBPAR spectrum of C8Rb was suppressed through hydrogen absorption at 300 K, while it decreased at first and then increased through absorption at 373 K [20]. From the change in the DBPAR spectral line shape of C8Rb, the hydrogen accommodated in C8Rb is considered to be atomic at 300 K and to be in the form of the hydride ion at 373 K.
Recently, the present authors have discussed the mechanism of the anomalous magnetic effect due to hydrogen uptake in C8RbHx with a theoretical formula extended from the "topological quasi-hydrogen" model. It has been suggested that the hydrogen state in C8RbHx might have a Kondo-like property. There exist some problems in the assumption that the narrow component of positron annihilation spectra corresponds to the interlayer state in AGICs. That is, the energy level of the interlayer state in the second-stage AGICs is above the Fermi energy EF. Furthermore, the narrow component does not appear attributable to simple positronium (Ps) in AGICs, because simple Ps cannot exist in the metallic state of AGICs. In this study, we analyze the ACPAR spectra of C24K [21] with the theoretical formula extended from the "topological quasi-positronium" model [22], discuss the origin of the anisotropic narrow components in the ACPAR spectra, and discuss the relation to the catalytic activity of hydrogens in AGICs. A Model System and the Soliton-Like Quantum Fluctuation The structure of the stage-2 GIC C24K is shown in Figure 1. In the stage-2 GIC C24K, the atomic density of potassium atoms intercalated between graphite layers is reduced to 2/3 of the density of the close-packed structure in the stage-1 compound, taking into account the difference in composition between the stage-1 compound C8K and the stage-2 compound C24K. According to the electronic structure model [16], quasi-two-dimensional graphite π-bands, bands originating from potassium-metal s electrons, and the electronic interlayer state coexist in the stage-2 GIC C24K, as shown in Figure 2(a). The Fermi energy EF and the location of the bands are determined by a balance between electronic and lattice energies. The electronic interlayer states were introduced by Posternak et al. [16]. These interlayer states, which exhibit free-electron character parallel to the layers, form a quasi-two-dimensional band close to the Fermi energy. Now, we shall consider the quantum fluctuation for the investigation of the quasi-(2+1) electron-state system. In this study, we propose a kind of quasiparticle, "the topological quasi-positronium", based on the famous quantum-fluctuation mechanism of Polyakov [23]. Polyakov [23] explained the relatively strong quantum fluctuation in a (2+1) system as follows: the hedgehog solution [24] [25] in the (3+1) system induces instanton-like quantum fluctuation in the (2+1) system. Now, we introduce the Lagrangian density in Equation (1), where ψ is the positron field and Φ is the scalar field with a zero asymptotics at infinity. The first and second terms in Equation (1) show the kinetic energy of the positron field and the interaction between the positron field and the gauge field. The third term corresponds to the kinetic energy of the gauge field. The fourth term shows the kinetic energy of the scalar field and the interaction between the scalar field and the gauge field. The fifth term shows the effective potential. Here, we set the symmetry breaking as in Equation (2). Then we enter Equation (2) into Equation (1) and can introduce the effective Lagrangian density as follows.
When the determinant of the infinitesimal transformation operator in this gauge condition is expressed by det M, the generating functional of the Green function follows, where J, k, η, and k̄ are the sources of the fields A_μ^a, φ, ψ, and ψ̄. Taking into account that φ_n is a kind of vacuum state, φ_3 corresponds to the soliton-like field introduced by 't Hooft [26], based on the disorder parameter [27] [28]. A charged skyrmion-like soliton in the quasi-(2+1) intercalate state in AGICs might be described as a topological soliton [22] [29] [30]-[32]. "Topological quasi-positronium" is regarded as a skyrmion-like soliton of the electron density that traps a positron. "Topological quasi-hydrogen" is regarded as a skyrmion-like soliton of the electron density that traps a proton. The fermion number densities J_sky^P(r) and J_sky^H(r) of "topological quasi-positronium" and "topological quasi-hydrogen" are represented, respectively, in the case of the SU(2) pseudospin-like skyrmion configuration [30]-[32]. J_sky(r) is to be thought of as the induced soliton-like electron density in the quasi-(2+1) system. Really, the quasi-(2+1) system is derived from the instanton-like fluctuation of these solitons [33]. Here we shall introduce one mechanism of the soliton-catalytic effect as follows. The effective Lagrangian for non-abelian gauge fields A_μ in the presence of a soliton φ_3 follows, where x_3 is parallel to the c-axis and the plane (x_1, x_2) is perpendicular to the c-axis. The corresponding variation of L_eff under an infinitesimal gauge transformation follows.
Conclusion We have analyzed the narrow component in the positron annihilation angular correlation spectra of the second-stage graphite-potassium intercalation compound C24K with the theoretical formula extended from "the topological quasi-positronium" model and have discussed the relation to the catalytic activity of hydrogens in AGICs.
Figure 1: The structure of the stage-2 GIC C24K. The carbon planes and potassium planes are stacked alternately.
Figure 2: (a), (b), (c) The electronic structure model of (a) alkali-metal GICs; (b) the case in which, from the ACPAR result, we consider that "topological quasi-positronium" exists; and (c) hydrogen-absorbed alkali-metal GICs. 4s denotes the alkali-metal s band; π and π* denote graphitic bands. The electronic structure changes on absorbing hydrogen.
Figure 3(a) shows the positron annihilation spectrum for pair momentum p perpendicular to the crystallographic c-axis (p ⊥ c) in C24K [21]. Assuming the p-distribution of the narrow component (p ⊥ c) of the positron annihilation spectrum is isotropic, the p-distribution of the narrow component (p ⊥ c) is shown in Figure 3(b). Figure 3(c) shows the autocorrelation function of the electron wave function perpendicular to the c-axis, which is Fourier-transformed from the narrow component for pair momentum p perpendicular to the c-axis. The solid line in Figure 3(c) shows the function J_sky^P(r); the value of ξ_P is estimated to be ~2.1 Å. The dotted line in Figure 3(c) shows the function J_sky^H(r) with ξ_H ~ 1.0 Å, approximately.
Figure 3: (a) The positron annihilation spectrum for pair momentum p perpendicular to the c-axis in C24K [21]. (b) The p-distribution of the narrow component perpendicular to the c-axis. (c) The autocorrelation function of the electron wave function perpendicular to the c-axis; the solid line shows the function J_sky^P(r) with ξ_P ~ 2.1 Å, and the dotted line shows J_sky^H(r) with ξ_H ~ 1.0 Å.
This shows the parity anomaly. That is, the soliton-like electron density in the interlayer state induces the parity anomaly in a quantized way. It is suggested that the soliton-like electron density in the interlayer state might induce the parity change, from the bonding electron wave function to the anti-bonding one, in the hydrogen molecule in the interlayer in AGICs to cancel the parity anomaly.
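The displayed formulas for the skyrmion densities did not survive extraction in the source. For orientation only, the topological (fermion-number) density of a two-dimensional skyrmion of a unit pseudospin field is conventionally written as below; this is the generic textbook form for an O(3)/SU(2)-pseudospin configuration, not necessarily the paper's exact expression for J_sky^P(r) or J_sky^H(r).

```latex
% Generic topological charge (fermion-number) density of a 2D skyrmion
% of a unit pseudospin field n^a(r), with a, b, c in {1, 2, 3}:
J^{0}_{\mathrm{sky}}(\mathbf{r})
  = \frac{1}{8\pi}\,\epsilon_{ij}\,\epsilon_{abc}\,
    n^{a}(\mathbf{r})\,\partial_{i} n^{b}(\mathbf{r})\,\partial_{j} n^{c}(\mathbf{r}),
\qquad
Q = \int d^{2}r\, J^{0}_{\mathrm{sky}}(\mathbf{r}) \in \mathbb{Z}.
```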
2018-12-25T03:03:26.973Z
2014-09-15T00:00:00.000
{ "year": 2014, "sha1": "7e4097d06c930a4a42133d033f815ed6f8d63ff8", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=50145", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "7e4097d06c930a4a42133d033f815ed6f8d63ff8", "s2fieldsofstudy": [ "Physics", "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
225047030
pes2o/s2orc
v3-fos-license
Deep time-delay Markov network for prediction and modeling of the stress and emotion state transition To recognize stress and emotion, most of the existing methods only observe and analyze speech patterns from present-time features. However, an emotion (especially stress) can change because it is triggered by an event while speaking. To address this issue, we propose a novel method for predicting stress and emotions by analyzing prior emotional states. We name this method the deep time-delay Markov network (DTMN). Structurally, the proposed DTMN contains a hidden Markov model (HMM) and a time-delay neural network (TDNN). We evaluated the effectiveness of the proposed DTMN by comparing it with several state transition methods in predicting an emotional state from time-series (sequence) speech data of the SUSAS dataset. The experimental results show that the proposed DTMN can accurately predict present emotional states, outperforming the baseline systems in terms of the prediction error rate (PER). We then modeled the emotional state transition using a finite Markov chain based on the prediction result. We also conducted an ablation experiment to observe the effect of different HMM values and TDNN parameters on the prediction result and the computational training time of the proposed DTMN. Related works In this decade, stress and emotion recognition systems using speech analysis have been extensively studied. Most of them use a standard architecture where the feature extraction and the classifier are the main components in recognizing stress and emotion patterns. The effectiveness of the feature representation is crucial to making the system efficient. The fundamental frequency, energy, formants, mel-frequency cepstral coefficients (MFCC), and the Teager energy operator (TEO) are typical techniques used to capture stress and emotion features 34. The identity vector (i-vector) and the DNN embedding vector (x-vector), which have succeeded in recognizing the speaker 35,36 and language 37,38, have also recently proven robust in representing stress 13 and emotion features 39. A single classifier, such as support vector machines (SVMs) 40,41, neural networks and their variations 12,34, the k-nearest neighbor (KNN), the Gaussian mixture model (GMM) 42, or the HMM 43, is commonly used to discriminate the types of stress and emotions. To enhance the performance of single classifiers, hybrid classifiers such as SVM/GMM 44 or ensemble models 11 have been proposed. A number of stress and emotion datasets have been provided (e.g., Speech Under Simulated and Actual Stress (SUSAS) 45,46, the Emotional Database (EmoDB) 47, the Keio University Japanese Emotional Speech Database (KeioESD) 48, and the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) 49). However, we know that stress has diverse characteristics and different patterns for each individual. It is affected by various aspects, such as personal characteristics, gender, background experience, and emotional tendencies 50. Considering these factors, more training data are required to make the system robust and able to adapt to real conditions. Unfortunately, stress and emotion data are difficult to collect on a large scale. To address this issue, some studies have explored unsupervised approaches for categorizing stress and emotion speech data based on the similarity of their characteristics. An unsupervised algorithm defines its effective objective in a self-learning manner [15][16][17][18]51,52.
Typically, an unsupervised clustering algorithm uses a similarity measure to compute the distance between data points in feature space 17,51,52. However, calculating the distance for all data points in high-dimensional data is inefficient, a problem known as the curse of dimensionality. In past years, some researchers have offered another approach for solving the curse of dimensionality by presenting a compact feature representation in the clustering assignment, known as deep clustering 53. Deep clustering uses a DNN-based autoencoder to transform the input into a low-dimensional feature representation and simultaneously learn the clustering assignment 20. With this ability, deep clustering has become a popular clustering method and is widely used in many practical applications. Technically, deep clustering strengthens the feature representation by pushing inter-cluster compactness. However, it accidentally ignores the effect of inter-cluster similarity. Unsupervised deep time-delay embedded clustering (DTEC) 21 offers discriminative loss supervision to address this issue. DTEC has proven more effective in categorizing stress and emotions. Since DTEC is unsupervised learning, the correspondence between the output classes and the informational classes cannot be confirmed, because there is no measured information about the relationship between observed clusters. By incorporating prior knowledge, a semi-supervised DTEC framework (SDTEC) 22 has proven able to provide information for guiding the clustering assignment. In some cases, emotion (e.g., stress) may change when triggered by an event while speaking 23. Thus, we argue that the exploration of emotional state transitions becomes a crucial consideration in recognizing emotion accurately. Several studies explicitly modelled the speaker's emotion by its state transition using KNN 23, long short-term memory (LSTM) 24, a Bayesian network 25, a finite state machine (FSM) 26, and the Markov model 27. Due to its ability to provide an excellent representation for time-series (sequence) data 54,55 with temporal variations 56, the HMM is widely used to model emotional state transitions. A Markov model assumes that only the dependencies between consecutive hidden states are modeled, so there are only local dependencies and limits in capturing long-term temporal context. To address this, the deep Markov neural network (DMNN) was proposed to learn in depth the hidden representation of the HMM using a recursive neural network 30. In this paper, a stress and emotion prediction model is proposed that considers the emotional state transition. The proposed DTMN can learn in depth the hidden representation of the HMM using convolution networks of fixed dimension (known as the time-delay neural network, or TDNN). Different from the DMNN, which uses a recursive neural network to connect the previous time steps of its hidden states, the proposed DTMN uses the TDNN to model the relation between hidden states and the observations by receiving as input the activation patterns over time from the units below. In addition, we apply a softmax function in the last layer to define the probability of each class.
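As a rough illustration of this idea (not the authors' implementation), the step below one-hot encodes prior hidden states, combines them linearly with the current speech features, and applies a softmax over states. A single linear layer stands in for the full five-layer TDNN described in the Method section, and all names and dimensions are made up for the example.

```python
import numpy as np


def softmax(z):
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()


def predict_hidden_state(prev_states, feat, W_q, W_f, n_states):
    """One DTMN-style step: P(q_t | prior states, current features)."""
    onehot = np.zeros(len(prev_states) * n_states)
    for i, s in enumerate(prev_states):  # one-hot encode each prior state
        onehot[i * n_states + s] = 1.0
    return softmax(W_q @ onehot + W_f @ feat)


rng = np.random.default_rng(0)
n_states, n_prev, feat_dim = 5, 2, 13    # e.g. 5 hidden states, 13 MFCCs
W_q = rng.normal(size=(n_states, n_prev * n_states))
W_f = rng.normal(size=(n_states, feat_dim))
probs = predict_hidden_state([3, 1], rng.normal(size=feat_dim),
                             W_q, W_f, n_states)
print(probs, probs.sum())                # a distribution over the states
```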
We evaluate the effectiveness of the DTMN in predicting the stress and emotion state from the speech data of SUSAS 45,46 and compare it with state-of-the-art state transition models, such as KNN 23, LSTM 24, the Bayesian network (BN) 25, HMM 54, and DMNN 30. For further evaluation, we conducted an ablation experiment to investigate the effect of the HMM and TDNN parameters on the prediction result. Results We demonstrate the effectiveness of the proposed DTMN in predicting the present state of stress and emotion and then model their state transition. The proposed DTMN is assigned to predict the state of stress and emotion from the speech data of the SUSAS dataset. The performance of the DTMN is evaluated by comparing it with the baseline systems in terms of the prediction error rate (PER). Furthermore, we model the state transition of stress and emotions based on the speech labels from the prediction result. Prediction accuracy. The effectiveness of the proposed DTMN is evaluated in predicting the emotional state of the time-series observations. In this experiment, we set the input and the parameters of the DTMN as mentioned in the "DTMN parameters setting" section and the "Baseline systems setting" section, respectively. We ran each system independently 10 times, and the averaged evaluation results are summarized in Table 1. Table 1 shows that BN presents a lower error than KNN. This is because KNN requires proper scaling among variable time steps, while BN depicts the relationships between variables at each time step in the manner of conditional independencies. However, BN cannot represent nonlinear functions of the state variables; hence, BN has a higher error rate than HMM. The performance gap between LSTM and HMM shows that in-depth learning of the hidden state is more effective than statistical machine learning. Although the LSTM learns long-term temporal context dependencies, many emotional states are hard to determine or even unobservable. The combination of HMM and DNN (such as DMNN and the proposed DTMN) presents a better ability to overcome the LSTM's limitations, demonstrating a lower error rate. By considering the activation patterns over time, the proposed DTMN significantly outperforms the DMNN in predicting the emotional state. The proposed DTMN is a sophisticated emotional state transition model that achieves an average prediction error rate of 8.55%. Emotional state transition. In the "Prediction accuracy" section, the proposed DTMN demonstrates an effective result in predicting stress and emotion by their state transition. This indicates that the proposed DTMN can accurately predict the present state based on the prior states. Furthermore, we use a finite Markov chain to model the pattern of emotion transitions. Since males and females express emotion in different ways 57, we present the state transitions of males and females in different diagrams. Figure 1 shows the emotional state transition model. Tables (a) and (b) denote the state transition probabilities for males and females. P_{i,j} indicates the transition probability from state i to state j. For instance, P_{1,5} is the state transition probability from the state "angry" to the state "soft", with probability "0.02" for males and "0.26" for females. Each table shows that the sum of each row is one.
As an example, the first row of Table (a) represents that the sum of the transition probabilities from the state "angry" to the other states (angry, high stress, low stress, neutral, and soft) is one. This indicates that the transition matrix is a stochastic process, i.e., ∑_j P(i, j) = 1. From Tables (a) and (b), it is clear that the highest probabilities of each row and column lie on the diagonal. This indicates that emotions typically do not change in a short time; the current emotional state will be retained if there are no typically effective stimuli. However, the highest column sum is "neutral" for males and "soft" for females. This supports the view that females are more emotional than males. Another surprise is that females are more likely to become "soft", while males are more likely to become angry after stressful conditions, which indicates that the genders respond to emotional stress with different reactions, both psychologically and biologically, depending on their background experience and their behavioral and physiological domains. Discussion In this paper, we present a novel framework for stress and emotion prediction and modeling. Structurally, the DTMN consists of an HMM and the TDNN. The HMM is trained to produce the transition probabilities and the hidden states at each time step. The TDNN can learn in depth the hidden representation of the HMM by creating more extensive networks from sub-components. In the prediction task, the DTMN is assigned to predict the emotional state of the time-series observations. As shown in Table 1, the DTMN can outperform the baseline systems by achieving the lowest prediction error rate. This result indicates that the proposed DTMN overcomes the challenge of accurately predicting the change in emotion while speaking. Moreover, we showed that our method is efficient and effective in predicting stress and emotion. As mentioned above, emotion can usefully be defined as states elicited by reinforcements. These reinforcements or stimuli can be considered emotional information. As we know, every person can recognize and understand others' emotions without any training, and this is too complex to be fully described by machine learning. Therefore, we argue that there are common patterns of emotional events. In this work, we presume that the cognitive assessments of basic emotional stimuli are the same. Then, we use the five discrete emotional states (high stress, low stress, neutral, soft, and angry) from the SUSAS database, with the movements between emotional states modeled by a Markov process, as shown in Fig. 1. We represent males and females in different schemes because they express emotion in different ways. Generally, males and females present a similar emotional transition representation. However, there are some fundamental differences between male and female emotional transition tendencies. Females tend to change their emotions more easily, but they have a tendency to remain stressed longer than males. After a stressful period, females tend to become "soft", while males more easily become "angry". Method The proposed DTMN structurally consists of a Markov model, represented by the HMM, and a neural network, represented by the TDNN. Figure 2 shows the framework for predicting and modeling the stress and emotions using the proposed DTMN, which is performed in three phases: the training phase, the prediction phase, and the emotional state transition modeling phase. We perform a series of training procedures to obtain the estimated parameters of the DTMN.
We perform a series of training procedures to obtain the estimated parameters of the DTMN. The HMM is trained on the time-series observations to produce the transition probabilities and the hidden states at each time step. Then, the TDNN is trained to predict the present hidden state using the present speech features and the prior hidden state as input. After the training phase, we obtain the estimated parameters of the HMM and the TDNN.

In the prediction phase, the trained DTMN is used to predict the emotional state labels of the unlabeled observations. We conduct the opposite procedure to the training phase. First, the TDNN model predicts the present hidden states using the present speech features as input. Then, the HMM model predicts the emotional state labels of the unlabeled observations using the predicted hidden states. In the emotional states transition modeling phase, we model the transition pattern of emotions using a Markov chain with the predicted emotional states as input. This phase aims to illustrate the pattern of emotional state transitions.

Hidden Markov model. The HMM is a Markov chain whose internal state cannot be observed directly but only through some probabilistic function. In other words, the internal state of the model alone determines the probability distribution of the observed variables. This unobservable state is known as the hidden state. An advantage of hidden states is that they do not require special discretization and normalization, so we can deal with arbitrary observations. In addition, random noise in the observations can be handled by the hidden states. Therefore, the proposed DTMN uses the hidden-state representation to connect observations. For instance, consider an observation $f_t$ and a state label $y_t$, where $t = 1, 2, \ldots, T$. As shown in Fig. 3, $f_t$ and $y_t$ are the speech feature and the item that we want to predict at time $t$. Given tuples $(f_t, y_t)$, a classification model is used to predict $y_t$. We introduce a hidden state variable $q_t$ at each time step to connect the observation $f_t$ and the label $y_t$. The parameter learning task in the HMM is to find the best set of state transition and emission probabilities. We establish the relationship between the hidden states and the labels as follows:

$a_{ij} = P(q_t = j \mid q_{t-1} = i), \qquad e_{ij} = P(y_t = i \mid q_t = j), \qquad (1)$

where $i, j \in \{1, \ldots, N\}$. Each $a_{ij}$ represents the probability of a transition from state $i$ to state $j$, and each $e_{ij}$ expresses the probability of $y_t$ being generated from state $j$.
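As a sketch of what Eq. (1) defines (in the actual model these quantities are estimated with the Baum-Welch algorithm, as described below), count-based estimates of the transition matrix A and the emission matrix E can be computed once decoded hidden states and labels are available; the sequences here are toy data.

```python
import numpy as np

def estimate_hmm_matrices(q, y, n_states, n_labels):
    """Count-based estimates of Eq. (1): A[i, j] ~ P(q_t = j | q_{t-1} = i),
    E[i, j] ~ P(y_t = i | q_t = j). q and y are integer index arrays."""
    A = np.zeros((n_states, n_states))
    E = np.zeros((n_labels, n_states))
    np.add.at(A, (q[:-1], q[1:]), 1)   # transition counts
    np.add.at(E, (y, q), 1)            # emission counts
    A /= np.maximum(A.sum(axis=1, keepdims=True), 1)  # normalize rows of A
    E /= np.maximum(E.sum(axis=0, keepdims=True), 1)  # normalize columns of E
    return A, E

rng = np.random.default_rng(0)
q = rng.integers(0, 8, size=5000)   # toy decoded hidden-state sequence
y = rng.integers(0, 5, size=5000)   # toy emotional state labels
A, E = estimate_hmm_matrices(q, y, n_states=8, n_labels=5)
```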
Time-delay neural network. We use convolution networks with a fixed-dimension size (known as the time-delay neural network, or TDNN) to predict the present hidden states. The TDNN is a multilayer artificial neural network architecture that uses a modular and incremental design to create more extensive networks from sub-components. This makes the TDNN effective in learning the temporal dynamics of the signal even for short-term feature representations [31]. Unlike a standard DNN, in processing a wider temporal context, the first layer of a TDNN learns the context in a narrow temporal window and passes it on to deeper layers. Distinctively, the TDNN receives input not only from the hidden-state representation at the layer below but also from the activation pattern of the unit output and its context.

In this paper, the TDNN is used to model the relation between the hidden states and the observations by applying the relation between the hidden states and the labels (Eq. 1). Specifically, the TDNN predicts the present hidden state $q_t$ by taking as input the prior hidden states $q_{t-1}, \ldots, q_{t-N}$ and the present features $f_t$. The structure of the TDNN is shown in Fig. 4, and each layer's function is summarized in Table 2. As shown in Fig. 4 and Table 2, we designed a TDNN with five layers. Layer-1 holds the full temporal context of prior hidden states from $q_{t-5}$ to $q_{t-1}$, splicing together frames $[0, -2]$. In Layer-2, we apply the sub-sampling technique (locally connected) [32], so that only two temporal contexts ($q_{t-3}$ and $q_{t-1}$) are held. Then, in Layer-3, we concatenate the present speech features $f_t$ and the $q_{t-1}$ feature from the second layer. A fully connected layer and a softmax layer are applied in Layer-4 and Layer-5 of the TDNN, respectively. The softmax function defines the probability by taking a $C$-dimensional vector $Z$ (from Layer-4) as input and outputting a $C$-dimensional vector $\tau$ (real values between 0 and 1). The normalized exponential of the softmax function is expressed as

$\tau_c = \frac{e^{z_c}}{\sum_{d=1}^{C} e^{z_d}}, \qquad z = w_q\,\alpha(q_{t-1}) + w_f\,\beta(f_t), \qquad (2)$

where $w_q$ and $w_f$ are the coefficients to be estimated, and $\alpha$ and $\beta$ are the functions used to transform $q_{t-1}$ and $f_t$ into feature vectors. We apply a binary (one-hot) encoding for $\alpha(q_{t-1})$, setting the $q_{t-1}$-th coordinate to 1 and the others to zero. The denominator $\sum_{d=1}^{C} e^{z_d}$ is a normalizer that ensures $\sum_{c=1}^{C} \tau_c = 1$.

Training phase. In the training phase, the DTMN is trained to obtain the estimated parameters of the HMM and the TDNN. We perform the training phase in two steps. As shown in Fig. 2, the first step is to estimate the hidden states $q_t$ based on the labels $y_t$ using the Baum-Welch algorithm, from which the transition matrix $A$ and the emission matrix $E$ are estimated. After $q_t$ is estimated, the second step is to estimate the parameters of the TDNN. We use the TDNN structure (Fig. 4) in a supervised prediction task: the TDNN is trained to predict the hidden state $q_t$ at each time step. Iteratively, we estimate the TDNN's parameters ($w_q$, $w_f$, and $\beta$) by minimizing the negative log-likelihood using stochastic gradient descent (SGD).

Prediction phase. After the training phase, we obtain the estimated parameters of the HMM ($A$ and $E$) and of the TDNN ($w_q$, $w_f$, and $\beta$). These estimated parameters are used to build the DTMN model. In the prediction phase, we perform the opposite procedure to the training phase. The DTMN model is used to predict the label $y_t$ of the unlabeled observations using the present feature $f_t$ and the prior hidden state $q_{t-1}$. By Eq. (2), we use $f_1$ to predict $q_1$; then $q_1$ and $f_2$ are used to predict $q_2$; next, $(q_2, f_3)$ is used to predict $q_3$. This procedure continues until $Q = \{q_t\}_{t=1,2,\ldots,T}$ is obtained. Since each $q_t$ is a random variable and $P(q_t \mid f)$ is obtained one by one from $t = 1$ to $t = T$, the probability distribution of the labels $y_t$ that gives the prediction for the label is

$P(y_t \mid f) = \sum_{j=1}^{N} P(y_t \mid q_t = j)\, P(q_t = j \mid f). \qquad (3)$
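A minimal PyTorch sketch of this five-layer structure, together with the step-by-step prediction just described (not the authors' code): the feature dimension of 40 and hidden size of 256 are illustrative assumptions — the paper uses 4000-dimensional hidden layers — and the locally connected sub-sampling of Layer-2 is simplified to a dense layer.

```python
import torch
import torch.nn as nn

class TDNNSketch(nn.Module):
    """Illustrative five-layer TDNN: spliced prior states -> sub-sampled context ->
    concatenation with present features -> fully connected -> softmax (Eq. 2)."""

    def __init__(self, n_states=80, feat_dim=40, hidden=256):
        super().__init__()
        self.layer1 = nn.Linear(5 * n_states, hidden)       # context q_{t-5}..q_{t-1}
        self.layer2 = nn.Linear(hidden, hidden)             # stand-in for sub-sampling
        self.layer3 = nn.Linear(hidden + feat_dim, hidden)  # concatenation with f_t
        self.layer4 = nn.Linear(hidden, n_states)           # scores z; Layer-5 = softmax

    def forward(self, q_prev, f_t):
        h = torch.relu(self.layer1(q_prev))
        h = torch.relu(self.layer2(h))
        h = torch.relu(self.layer3(torch.cat([h, f_t], dim=-1)))
        return torch.softmax(self.layer4(h), dim=-1)        # probabilities over q_t

model = TDNNSketch()
n_states, T_len = 80, 10
feats = torch.randn(T_len, 1, 40)          # toy feature sequence
q = torch.zeros(1, 5 * n_states)           # empty context before t = 1
states = []
# Chained prediction: q_1 from f_1, then (q_1, f_2) -> q_2, and so on.
for t in range(T_len):
    probs = model(q, feats[t])
    s = int(probs.argmax())
    states.append(s)
    onehot = torch.zeros(1, n_states)
    onehot[0, s] = 1.0                     # binary (one-hot) encoding of q_t
    q = torch.cat([q[:, n_states:], onehot], dim=-1)  # slide the 5-step context
print(states)
```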
Emotional states transition modeling phase. A study [58] defined emotions as discrete patterns of systemic activity. Emotions are categorized clearly and consistently across multiple levels of analysis, such as subjective experiences, physiological activity, and neural activation patterns. This supports the view that emotions are discrete systems organized in a distributed fashion across the brain. A discrete system is characterized by a set of states and transitions between the states. To formally describe a discrete-event simulation, many works use a stochastic process algebra [59,60]. A discrete system can describe the passing of time and probabilistic choices among a limited number of processes; this is called a discrete stochastic process. Here, the universal quantifier is limited to feasible sequences of states, i.e., to sequences that occur with positive probability. In other words, it is defined as a discrete stochastic process with a finite number of states.

Since emotions form a discrete system of activity [58], we apply a finite Markov chain to model the state transitions of emotion. The finite set of states is high stress, low stress, neutral, soft, and angry. The emotional state updates depending on the current features and the prior states as input. In this emotional state transition modeling phase, the state transition matrix $P$ is represented by an $n \times n$ square Markov matrix in which each element is non-negative and the sum of each row of $P$ is one. Each row of $P$ denotes a probability mass function over all $n$ possible states. Given a finite state space $S$ with $n$ state values $x_1, \ldots, x_n$, a Markov chain $X_t$ is a sequence of random variables on $S$ that has the Markov property. This means that for any time step $t$ and any state $y \in S$,

$P(X_{t+1} = y \mid X_t, X_{t-1}, \ldots, X_0) = P(X_{t+1} = y \mid X_t). \qquad (4)$

This indicates that the probabilities for future states are known from the current state alone. Specifically, the set of values $P(x, y)$, with $(x, y) \in S$, fully determines the dynamics of the Markov chain. With $P(x, y)$ being the transition probability from $x$ to $y$ in one step (time) and $P(x, \cdot)$ being the conditional distribution of $X_{t+1}$ given $X_t = x$, $P$ is obviously a stochastic matrix, where

$P(x, y) \geq 0 \quad \text{and} \quad \sum_{y \in S} P(x, y) = 1. \qquad (5)$

Dataset. We used the stress speech data from the Speech Under Simulated and Actual Stress (SUSAS) database collected by the Linguistic Data Consortium (LDC) [45]. The SUSAS database is divided into four domains of various stresses and emotions obtained from 32 speakers (13 women, 19 men) [46]. More than 16,000 utterances are provided as labeled and unlabeled data. SUSAS labels the speech data with five stress and emotion states: neutral, medium stress, high stress, soft, and angry. We used two labeled conversations for estimating the two sets of parameters (HMM and TDNN). For evaluation, we used six unlabeled conversations with various speech durations. We conditioned the speech input by activity [62], speaker [63], and gender [64]. Then, each speech sample is represented in a low-dimensional embedding space using the SDTEC algorithm [22].

DTMN parameters setting. In the HMM model, we set the number of hidden states to 80 [30], and the state transition matrix and the initial state distribution are initialized randomly between 0 and 1. Gaussian distributions are used to determine the emission probabilities. In the TDNN model, we perform batch normalization with a batch size of 256 to stabilize the training procedure [30]. The rectified linear unit (ReLU) activation function is used in each hidden layer, which has a dimension of 4000.

Baseline systems setting. The effectiveness of the proposed DTMN is evaluated in predicting the stress and emotion state from the speech data of SUSAS. We compare it with five state-of-the-art state transition models, as follows:

- KNN: run KNN with all parameter settings and architecture the same as in [23]
- BN: run the BN with all parameter settings and architecture as in [25]
- HMM: run the HMM method with the same settings and architecture as in [54]
- LSTM: run the LSTM network with all parameter settings and architecture the same as in [24]
- DMNN: run the DMNN with the same settings and architecture as in [30]

We use the embedding feature representation from SDTEC (see the "Dataset" section) as input to all systems (baseline and proposed).
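All systems are compared on the prediction error rate. The text does not give PER a formula, so the sketch below assumes the common definition — the fraction of time steps whose predicted state differs from the reference label; treat that definition as an assumption.

```python
import numpy as np

def prediction_error_rate(y_true, y_pred):
    """Assumed definition: fraction of time steps with a wrong predicted state."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true != y_pred))

# One mismatch out of five time steps -> PER = 0.2 (i.e., 20%).
print(prediction_error_rate([0, 1, 2, 2, 4], [0, 1, 3, 2, 4]))
```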
Ablation experiments. An ablation experiment is a method used to investigate the abilities of a system's representations. It is especially helpful for observing the robustness of the system over an extensive working range [65], and it is an essential factor for safety-critical applications. Thus, to investigate the effectiveness of the proposed DTMN in more advanced applications, we conducted an ablation experiment. This experiment observes the effect of different values of the HMM and TDNN parameters on the prediction result.

In particular, we estimate the hidden states $q_t$ based on the labels $y_t$ using the Baum-Welch algorithm, from which the estimated state transition matrix $A$ and emission matrix $E$ are obtained, as expressed in Eq. (1). Specifically, the Baum-Welch algorithm uses the expectation-maximization (EM) algorithm to find the maximum-likelihood estimate of the parameters of the hidden Markov model (HMM) given a set of observed feature vectors. In the limit, the maximum-likelihood approach can produce an HMM that significantly overfits and consequently exaggerates the number of hidden states present in the signal. Hence, we argue that the correct selection of the number of hidden states in the HMM is a crucial problem that should be examined. In this experiment, we run the HMM model with different numbers of hidden states (5-100). Figure 5 shows the prediction error rate for different numbers of hidden states: increasing the number of hidden states reduces the prediction error rate significantly, and the lowest error rate is achieved when the number of hidden states is 80.
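As an illustration of such a hidden-state sweep (not the authors' code), the hmmlearn library's GaussianHMM — a standard Baum-Welch implementation, not necessarily the one used in the paper — can be refit for each state count; the features here are synthetic stand-ins for the SDTEC embeddings, and log-likelihood is printed in place of PER, which would require reference labels.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))  # synthetic feature sequence (stand-in for embeddings)

for n_states in (5, 10, 20, 40, 80):
    model = hmm.GaussianHMM(n_components=n_states, n_iter=10, random_state=0)
    model.fit(X)                      # Baum-Welch (EM) parameter estimation
    q = model.predict(X)              # decoded hidden-state sequence
    print(n_states, round(model.score(X), 1))  # per-sweep log-likelihood
```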
Because each process in the TDNN architecture is bound to the time steps, it resembles a convolutional network. An accumulated gradient updates the lower-layer parameters across input time steps. The TDNN computes the activations of the time steps at each layer and the dependencies across layers. Hence, a correct temporal contextual input determines the effectiveness of the TDNN architecture. Thus, in this section, we investigate the effect of various temporal contexts in the TDNN on the prediction result. We set each neural network to have a 4000-dimensional input. The investigation of the various temporal contexts is conducted on the first two layers of the TDNN architecture (Layer-1 and Layer-2); see Fig. 4. The TDNN predicts the present hidden state by taking as input a set of the prior hidden states $q_{t-1}, \ldots, q_{t-T}$ from the HMM. The prediction error rates of the TDNN with various temporal context inputs are presented in Table 3. TDNN-1 presents the highest prediction error compared to the other models. This indicates that a multi-temporal context input is better for predicting the present emotional state than a single temporal context. Furthermore, increasing the number of temporal contexts (TDNN-2 and TDNN-3) decreases the prediction error rate significantly. TDNN-4, which uses $[-1, -5]$ as input, provides the optimal temporal context for predicting the emotional state, achieving a PER of 8.31%.

The proposed DTMN models the temporal dynamics by capturing the long-term dependencies between states. Hence, it requires an acoustic model that can effectively deal with long temporal contexts. In the "Prediction accuracy" section, the effectiveness of the DTMN in modeling the temporal dynamics is evaluated in terms of the prediction error rate (PER). The accuracy of the prediction result is essential, but in practice (the implementation phase), the time complexity of the model should also be considered. Training involves finding a specific set of weights, based on training examples, that yields a predictor with excellent performance. Thus, training time is a main challenge in developing a model. Existing theoretical results show that a model that is computationally difficult is the worst choice [66]. Hence, in this ablation experiment, we observe the training time of the proposed DTMN, presented in Fig. 6. We compare the computational training time of the proposed DTMN to that of the baseline DMNN for different numbers of training samples (from 500 to 8,000). In this experiment, we train the systems on a computer with the specifications mentioned in the "Experiments" section. Figure 6 shows that the DTMN has a lower computational training time than the DMNN (1,433 seconds for the DTMN versus 8,952 seconds for the DMNN at 8,000 training samples). As mentioned before, the DTMN uses a TDNN to model the relation between hidden states and observations. The TDNN operates at different temporal resolutions, which increase in the higher layers of the network. The transforms in the TDNN are tied across time steps; for this reason, the lower layers of the network can learn invariant feature transforms effectively. Moreover, as shown in Fig. 4, we applied the sub-sampling technique, which makes the computation of the time-step activations more efficient than in a standard DNN.

Conclusion

In this paper, we proposed a new framework for predicting and modeling stress and emotions, named the deep time-delay Markov network (DTMN). The DTMN predicts the state of stress and emotions by considering its state transitions. Structurally, the proposed DTMN consists of a hidden Markov model (HMM) and a time-delay neural network (TDNN). The HMM is used to predict the hidden states at each time step, while the neural network is applied to learn in depth the hidden representation of the HMM. The TDNN predicts the present hidden state using the prior hidden states and the features of the present time as input. We explicitly used a compact feature representation of stress and emotion (the embedding features of SDTEC) as the input of the DTMN. The effectiveness of the proposed DTMN was evaluated by comparing it with several state transition models, such as KNN, LSTM, the Bayesian network, HMM, and DMNN, in the task of predicting the emotional state from the time-series data of the SUSAS dataset. Based on the evaluation results, the proposed DTMN outperformed the baseline state transition systems by achieving a prediction error rate (PER) of 8.55%. In further analysis, we conducted a comprehensive ablation experiment to investigate whether the estimated parameters of the HMM and TDNN are related to model performance. In particular, we investigated the effect of different numbers of hidden states in the HMM and of various temporal contexts in the TDNN on the prediction result and on the computational training time of the proposed DTMN. The experimental results showed that the lowest error rate was achieved with 80 hidden states and the TDNN temporal context $[t-1, t-5]$, and that the computational training time of the DTMN was about 1,400 seconds for 8,000 training samples. Furthermore, we applied a finite Markov chain to model the state transitions of stress and emotions. Based on the emotional state transition model, females tend to remain in stress conditions longer than males.
After a stressful period, females are more likely to become soft, while males more easily tend toward anger. In general, females are more emotional than males. Non-intrusive measurement methods (such as facial or speech analysis) are generally considered less effective than contact-based physiological methods (such as EEG and ECG). However, based on the experimental results, the proposed method achieved a low error rate in recognizing stress and emotions. In other words, the proposed system shows great promise for use in real life. Therefore, in the future, we will implement a smartphone application based on the proposed system as an early detection system for emotion.
Clinical Study and Review of Articles (Korean) about Retrorectal Developmental Cysts in Adults

Purpose: A retrorectal developmental cyst (tailgut cyst, epidermoid cyst, dermoid cyst, teratoma, or duplication) is a very rare disease, and its symptoms are not characteristic, so the disease is still sometimes misdiagnosed as a supralevator abscess or a complex anal fistula. We would like to present a clinical approach to this disease.

Methods: We retrospectively examined the charts of 15 patients who were treated for retrorectal cysts from January 2001 to November 2009.

Results: All 15 patients were female. The average age was 41 years (range, 21 to 60 years). Fourteen patients (93.3%) were symptomatic, and the most common symptom was anal pain or discomfort. Nine patients (60%) had had more than one previous operation (range, 1 to 9 times) for a supralevator abscess, an anal fistula, etc. In 12 patients (80%), the diagnosis could be made from the medical history and physical examination. Thirteen cysts (86.7%) were excised completely through the posterior approach. The average diameter of the cysts was 4.8 cm (range, 2 to 10 cm). The pathologic diagnoses were 8 tailgut cysts (53.3%), 5 epidermoid cysts (33.3%) and 2 dermoid cysts (13.3%). The average follow-up period was 18.3 months (range, 1 to 64 months).

Conclusion: In our experience, a high index of suspicion and the physical examination are the most important diagnostic methods. If a female patient has a history of multiple perianal operations, a retrorectal bulging soft mass, a posterior anal dimple, and no conventional creamy foul-odorous pus on drainage, the possibility of a retrorectal developmental cyst must be considered.

INTRODUCTION

Presacral or retrorectal developmental cysts include tailgut cysts, epidermoid cysts, dermoid cysts, teratomas and rectal duplications [1-5]. (Some authors do not include teratomas and rectal duplications as developmental cysts [2].) These are extremely rare diseases and are asymptomatic in 26-50% of patients [1,3,6]. Even when a patient presents with symptoms, they are often uncharacteristic. The authors examined the clinical course of 15 adult patients with retrorectal developmental cysts and reviewed the literature. This study aimed to provide a proper clinical approach to the diagnosis and treatment of retrorectal developmental cysts.

METHODS

The subjects were 15 patients who were treated for retrorectal developmental cysts at the colorectal surgery clinic of this hospital between January 2001 and November 2009. A retrospective medical record review was performed, considering sex, age, surgical history for the same lesion, symptoms presented at the time of the diagnosis, signs that aided the diagnosis, and surgical methods after the diagnosis.

RESULTS

Of the 15 patients, 9 had undergone one or more operations (1-9 times) prior to the diagnosis at this hospital, and 5 of the 9 (55.6%) had been operated on after a diagnosis of a supralevator abscess, i.e., a suprasphincteric fistula in Parks' classification (type IV in the Sumikosi classification). Three other patients had been diagnosed, respectively, with an intersphincteric fistula (type IIH+IIL in the Sumikosi classification, 11.1%), an unspecified fistula (11.1%), and pilonidal disease or a perianal abscess (11.1%). The last patient (11.1%) had been diagnosed with a retrorectal cyst at another hospital, but it was a recurrent case (11th patient in Table 1).
In 2 of the 15 patients (13.3%) (4th and 8th patients in Table 1), the diagnosis of a retrorectal cyst was made during an operation for a fistula and a perianal abscess, and in 1 case (6.7%) (3rd patient in Table 1), the diagnosis was made incidentally on a CT scan during health screening. Apart from these, the diagnoses in the other 12 cases (80%) were made by history taking and physical examination and were confirmed by CT or magnetic resonance imaging (MRI). In 3 cases (20%), including the 2 cases diagnosed during an operation, even though the patients had undergone MRI prior to the diagnosis, they were initially misdiagnosed as having a complicated fistula or a supralevator abscess, and the diagnoses were corrected later (4th, 6th, and 8th patients in Table 1).

A soft prominence was palpable in 13 patients (86.7%) during the physical examination, and 10 patients (66.7%) complained of tenderness in the retrorectal area. The existence of a posterior anal funnel-shaped dimple (fovea coccygea) (Fig. 1) was investigated in 7 patients and was confirmed in 3 (42.9%) but not in the other 4 (57.1%). We were unable to confirm the existence or absence of a posterior anal funnel-shaped dimple in the other 8 cases. One patient (12th patient in Table 1) (6.7%) presented with a mild fever of around 37.5°C and an increased leukocyte count of 11,000; the other 14 patients (93.3%) were apyrexial and had normal leukocyte counts.

A posterior approach was used in all surgeries. A coccygectomy was done in 7 cases (46.7%); the coccyx was not removed in the other 8 cases (53.3%), and an intersphincteric approach was used in 3 (20%) of these 8 cases. A total excision was done in 13 cases (86.7%); the 2 cases (13.3%) (1st and 4th patients in Table 1) that had fecal incontinence prior to the surgery and a high risk of worsening fecal incontinence did not undergo a total excision. The maximum diameter of the cysts was 4.8 cm on average (range, 2 to 10 cm). Histological investigation revealed 8 tailgut cysts (53.3%), 5 epidermoid cysts (33.3%) and 2 dermoid cysts (13.3%); there were no malignancies. The mean follow-up period was 18.3 months (range, 1 to 64 months); wound infection occurred in 1 case (12th patient in Table 1) but was resolved with conservative management and did not recur.

DISCUSSION

Retrorectal or presacral tumors are extremely rare diseases. Whittaker and Pemberton reported 22 cases at the Mayo Clinic during the 15-year period between 1922 and 1936, and Jao et al. reported 120 cases during the 20-year period between 1960 and 1979, noting that the diagnosis was made in 1 of 40,000 registered patients [8]. Uhlig and Johnson [1] likewise reported that the total number of retrorectal tumor cases in all the major hospitals in the Portland area of Oregon State over 30 years was 63, corresponding to about 2 patients in the metropolitan area every year. In Korea, Kim et al. [9] experienced 15 cases over 5 years, Cho et al. [10] saw 34 cases over 5 years and 8 months, and Kwon et al. [11] reported 10 cases over 6 years, so the diagnosis has been made on average in about 1.6 cases per year. These data show retrorectal tumors to be extremely rare, but Hobson et al. [2] reported that most surgeons, even those who do not work in tertiary hospitals, will encounter this disease at least once in their careers.
The classification by Uhlig and Johnson, which is modified from the classification by Lovelady and Dockerty, is widely used to classify retrorectal tumors as congenital, inflammatory, neurogenic, osseous, and other [1,8]. Congenital tumors are the most common form of retrorectal tumor, and among them the developmental cyst is the most significant to surgeons. Generalizing from the reports with relatively large numbers of subjects published in the international literature to date, about 50-70% of retrorectal tumors are congenital, and developmental cysts are found in about 30-50% of all retrorectal tumors and in about 50-60% of all congenital tumors. However, it is difficult to estimate the accurate prevalence of developmental cysts because research on large numbers of patients is lacking and because many patients do not go to hospitals: about 40-50% of benign retrorectal tumors [13] and about 26-50% of developmental cysts [1,3,6] are reported to be asymptomatic.

In Korea, 47 adult cases of retrorectal, presacral, or coccygeal developmental cysts were reported in surgical journals between 1990 and 2009, including the 15 cases reported in this study (Tables 3 and 4). The surgical literature on retrorectal developmental cysts reported by Korean authors over the last 20 years was reviewed on national medical journal portal sites on the internet using the search terms developmental cyst, retrorectal, presacral, precoccygeal, tailgut cyst, epidermoid cyst, dermoid cyst and teratoma, and the results were summarized.

Developmental cysts are reported to be 2-15 times more prevalent in women than in men and are found in all age groups, including newborn babies and infants. The sex of the patient could be identified in 37 of the 47 adult cases, and female patients were more prevalent than male patients (32 vs. 5) with a male-to-female ratio of 1:6.4. The age could be identified in 35 cases; the mean age was 40.3 years (range, 21 to 74 years), and the diagnosis was made mostly in patients in their 20's (11 cases, 31.4%), 40's (9 cases, 25.7%) and 50's (9 cases, 25.7%) (Table 4). The greater prevalence among female patients is suspected to result from cysts being found incidentally during regular obstetrics and gynecology check-ups in women of childbearing age, but this suspicion has not been proven [2,4,6].

Of retrorectal developmental cysts, 26-50% [1,3,6] are known to be asymptomatic, and the presenting symptoms are known to be similar to those of perianal inflammatory diseases, such as anorectal abscesses, anal fistulae or pilonidal disease. Consequently, colorectal specialists who examine and treat adult benign rectal diseases in their private clinics are unlikely to have experience with this disease and have limited diagnostic tools in their clinics, so even when patients present with symptoms, a misdiagnosis as a common perianal inflammatory disease can easily be made unless the possibility of the disease is kept in mind. Inappropriate surgery can damage the sphincter muscle, and subsequent fecal incontinence and structural changes, such as adhesions after surgery, make a correct diagnosis more difficult [7]. Furthermore, structural changes may make total excision difficult, leading to serious complications, including malignancy. Hence, the first doctor who examines a patient with a developmental cyst in the retrorectal area has the best opportunity to treat the disease [7]. The presenting symptoms can vary depending on the size of the cyst.
The most commonly reported symptom is an indescribable dull pain in the perianal area; in addition, constipation, a sensation of residual stool, changes in stool diameter, bleeding, a feeling of distension in the rectum, pain in the back or pelvic area, abdominal pain and anuresis are seen. When the cyst is infected, the patient can be pyrexial [14]. A loose sphincter muscle or loss of sensation in the perineum indicates invasion of the sacral nerve [4]. The disease can also lead to difficult delivery in childbearing-age patients by blocking the birth canal [1]. Among the 47 cases reported in Korea, the existence or absence of symptoms could be confirmed in 35 cases, 29 (82.8%) being symptomatic and 6 (17.1%) asymptomatic, proportions slightly lower than the figures reported in the literature [1,3,6,13] (Table 4).

The most important element of the diagnosis is an accurate physical examination, which aids in deciding on a surgical method as well as in making the diagnosis. The diagnosis of a retrorectal tumor by physical examination alone is reported to be made in 90-100% of patients [2,8,15]. The diagnosis in 12 of the 15 cases (80%) at this hospital was made by history taking and physical examination (Table 2). During the physical examination, first of all, a soft prominence can usually be felt in the retrorectal area. The finger should be rotated to examine the whole area and to evaluate the size [1]. Caution should be taken because the cyst may feel like mere mucosal folds when it is lax. The patient might complain of tenderness, and a sensation of shifting fluid may be felt when the fluid-filled cyst is pushed. Among our 15 cases, tenderness was expressed in 10 cases (66.7%).

One of the patients (6th patient in Table 1) had had repeated surgeries after having been diagnosed with a supralevator abscess or a fistula. Even after investigations such as MRI had been performed, the lesion was not diagnosed as a developmental cyst; the diagnosis was made retrospectively after the bubbling of gas was felt over the levator ani muscle during follow-up. This was judged to be a collapsed cyst whose contents had been removed. Therefore, the physical examination is the single most important element of the diagnosis. No prominence was felt in the retrorectal area in another case (8th patient in Table 1, Fig. 2A), and the cause of this was thought to be that the cyst was small and its contents had been almost completely removed through a secondary fistula created by previous surgeries in the retrorectal area. In this case, a fistula was suspected, and a fistulectomy was performed, but the fistula was found to be connected to the presacral area during the fistulectomy. Subsequently, a precoccygeal cyst was identified after the coccyx had been removed. The diagnosis prior to surgical intervention will not be easy in cases like this.

Even if no chronic inflammation or acute infection exists, indescribable pain around the perianal area, abdominal pain, and tenderness in the retrorectal area can be induced by the pressure of a large cyst, which can be misunderstood as symptoms of a supralevator abscess. If the cyst is misdiagnosed as a supralevator abscess and is drained, the cyst will collapse, and its extent will be difficult to judge. Also, surgeons may wait for a fistula tract to form after incision and drainage of the cyst, but when the cyst fills again, it might be perceived as a recurrent supralevator abscess (4th, 6th, 7th, 8th, 9th and 12th patients in Table 1).
However, supralevator abscesses are more prevalent in males, and patients commonly complain of anal pain with an inflammatory response, such as fever, chills and increased leukocytes, because the abscess has already progressed extensively by the time the patient comes to the hospital. Thus, especially in female patients, a possible retrorectal developmental cyst should be considered when they complain of indescribable pain in the perianal area and tenderness over a retrorectal prominence without fever, chills or increased leukocytes. In addition, it is helpful to consider a possible retrorectal developmental cyst when patients complain of a relatively low level of pain or tenderness in the retrorectal area even when the amount of pus judged from the physical examination is thought to be large. Only one case (14th patient in Table 1) presented with signs of infection, such as fever, chills or increased leukocytes, even though the other patients also complained of pain or tenderness over the retrorectal prominence or in the perianal area. Pilonidal disease is prevalent in hairy male patients, and in that disease no lump is palpable in the retrorectal area, so the cyst can be distinguished.

Secondly, the existence or absence of a characteristic posterior anal funnel-shaped dimple (fovea coccygea), with a midline opening on the dimple caused by a connection between the retrorectal developmental cyst and the skin (Fig. 1), should be confirmed. These kinds of retrorectal excavations have been reported at various rates of about 30-100% in the literature [1,2,16]. Grandjean et al. [3] reported these findings as typical of tailgut cysts. About 30-50% of developmental cysts are reported to be chronically infected [4,5], so caution should be taken because perianal pain and tenderness caused by inflammation of a developmental cyst can lead to the midline opening on the dimple being mistaken for the secondary opening of a common perianal fistula. The opening of a developmental cyst has a "congenital, dimpled look," but the opening secondary to a perianal fistula has an "acquired look" (Fig. 1). The secondary opening usually develops from an infected anal gland and can be distinguished from the congenital opening of a developmental cyst because the secondary opening of a perianal fistula usually lies at the level of the skin's surface, can develop in a nonspecific area, such as the perianal area, the perineum or the gluteal area, and can be accompanied by granulation tissue [17]. However, if the developmental cyst is initially misdiagnosed and the original posterior anal dimple and midline opening are removed, these lesions will be much less likely to be identified at later surgery, and the developmental cyst might be repeatedly misdiagnosed. If there is a perianal scar due to previous repeated surgery, a developmental cyst should be suspected. This is important because, if the cyst was misdiagnosed as a perianal fistula or a supralevator abscess by the first doctor who operated, the next doctor might diagnose it as a recurrent abscess because of that preconception. The surgical histories of 25 of the 47 cases reported in Korea could be confirmed; eleven of them (44%) had had more than one surgery for the same lesion (Table 4).
Hawkins and Jackaman [16] pointed out that a retrorectal developmental cyst should be suspected especially when female patients come to the hospital with histories of recurrent anal or rectal abscesses and fistulae, a history of repeated surgery for these lesions, a fistula with a posterior anal funnel-shaped dimple even if it is not infected, a palpable lump in the precoccygeal or presacral area, or hairy or cheesy discharge from the anus or a perianal fistula. Spencer and Jackaman [17] concluded that a congenital developmental cyst should be suspected in patients with a recurrent retrorectal abscess, a repeated fistulectomy, the presence of a fistula in the anus, the perianal area or the rectum without identification of a primary focus at the dentate line of the anus, a posterior anal funnel-shaped skin dimple, or a palpable fixed or distended lesion in the precoccygeal area.

Fig. 2. (A) On magnetic resonance imaging (MRI; T2WI), a small lesion with high signal density (arrow) was noted in the retrorectal area. The lesion was so small that it was preoperatively diagnosed as an anal fistula, but intraoperatively it was diagnosed as a developmental cyst. (B) On MRI (T2WI), a multiloculated cystic lesion (big arrow) with high signal density was noticed in the retrorectal area, and a secondary opening (small arrow) made by a previous operation was noticed (same patient as the one in Fig. 1B).

In terms of diagnostic investigation, the size of the tumor, the cyst structure, the degree of invasion of the rectum and, if the cyst has become malignant, lymphatic metastasis can be assessed by using transrectal ultrasonography [13]. With endoscopy, prominent mucosa in the retrorectal area can be seen if the lesion is large enough, and the level of proximal extension can be confirmed as well [2]. A thin external wall and a cystic tumor with septation can be seen on CT. On CT, malignancy should be suspected if there is a calcified cystic wall; this calcification is more common in a dermoid cyst or a teratoma. Rarely, air inside the cyst can be seen when a fistula has formed. MRI has become the most important investigative tool in recent years, replacing the other radiological examinations. MRI shows the location, the size and the characteristics of the tumor precisely and provides vital information for deciding on surgical treatment. In general, developmental cysts are recognized as cysts with low signal intensity and a distinctive thin external layer on the T1-weighted image and as cysts with high signal intensity on the T2-weighted image. However, they can show high signal intensity even on the T1-weighted image if the contents of the cyst are mucoid (tailgut cyst) or fatty (dermoid cyst) (Fig. 2B). Even on the T1-weighted image, tailgut cysts can show anywhere from low to high signal intensity depending on the density and viscosity of the mucinous material and the high-protein components, and on the presence or absence of bleeding inside the cyst. The T2-weighted image is more useful than the T1-weighted image or CT for confirming smaller cysts and septation. Whereas other types of developmental cysts are monolocular, tailgut cysts usually have a multilocular structure or a structure of small cysts attached to a bigger main cyst, so a different signal intensity can be seen in each locule of a multilocular cyst [18].
The cystic wall becomes thicker when a developmental cyst is infected, and the borderline of the wall becomes less distinct as it progresses to malignancy [11]. Therefore, malignancy can be suspected if the wall of a cyst has an irregular, thickened look on the T1- and T2-weighted images or on the contrast-enhanced image. However, a comprehensive approach that considers the medical history, the symptoms and the physical examination is important, because a developmental cyst can still be misdiagnosed as a supralevator abscess or a complicated anal fistula despite a preoperative MRI (4th, 6th, and 8th patients in Table 1).

Biopsy prior to an operation is usually not recommended because an accurate pathological diagnosis from a local biopsy is often impossible and because the biopsy can cause secondary infection or the spread of cancer cells if the cyst is malignant. Therefore, a biopsy should be performed selectively, for conservative management such as radiotherapy, when sacral invasion by a malignancy that is not feasible to remove is suspected. Also, a biopsy via the rectum should be avoided to prevent cancer cells from spreading into the rectum; a CT-guided extrarectal or presacral approach is recommended [2,4]. Differential diagnoses that need to be eliminated are an anal fistula, a perianal abscess, pilonidal disease, other types of retrorectal congenital tumors, neurogenic tumors and osseous tumors. Consultation with a neurosurgeon or an orthopedic surgeon is essential if the latter diseases are suspected. The final pathological diagnosis and treatment can be accomplished by removing the cyst completely.

The retrorectal space is a potential space that appears only when the rectum is displaced anteriorly by a tumor; it is bordered by the rectum anteriorly, the sacrum and the coccyx posteriorly, the peritoneal reflection superiorly, the levator ani and coccygeal muscles inferiorly, and the iliac vessels and the ureters on the left and right sides. In the retrorectal space lie the sacral plexus, the middle hemorrhoidal vessels, the median sacral vessels and lymph nodes. This space can generally be approached anteriorly, posteriorly, or with a combined anterior-posterior approach, depending on the size and the location of the tumor. Opinions differ among authors, but the posterior approach via the anus and perineum should be attempted when the size of the tumor is less than 5 cm and the whole tumor feels as if it is located relatively inferiorly. In the posterior approach, the Kraske operation and the paracoccygeal approach, which access the tumor through an incision posterior to the anus and removal of the coccyx, are usually used, but an intersphincteric approach, which accesses the tumor after opening the intersphincteric space and dividing the levator ani muscle, can also be used. The operative method could be confirmed in 40 of the 47 Korean cases, and the posterior approach was used in 30 of those 40 (75%) (Tables 3 and 4). A conventional coccygectomy has been reported to be vital for securing an adequate visual field for complete excision during a posterior approach when severe adhesion to the surrounding tissues caused by repeated operations or infection exists [2]. At this hospital, a posterior approach with coccygectomy was used in 7 of our 15 cases (46.7%) (Fig. 3). When the cyst is adherent to the coccyx or becomes malignant and invades the coccyx, residual cells can cause recurrence.
Neoplastic cells are reported to develop easily in the coccyx; hence, a coccygectomy is essential to cure the disease, especially in patients with a teratoma [2]. A partial sacrectomy might be necessary in the case of malignant tumor invasion of the sacrum or to secure the visual field (3rd patient in Table 1). A partial sacrectomy below the S3 border can be performed without damaging the function of the anal sphincter; removal above the S3 nerve roots bilaterally can cause fecal incontinence, urinary incontinence and impotence [13,19]. Complete cyst removal was impossible in 2 of our 15 cases (13.3%). The first case had a high risk of rectal perforation and already had relatively severe fecal incontinence prior to the surgery. The other case had a high risk of worsening fecal incontinence due to reduced anal pressure on anal manometry and unilaterally delayed latency on pudendal nerve terminal motor latency (PNTML) testing prior to the surgery (1st and 4th patients in Table 1). The intersphincteric approach causes less damage to the surrounding tissues, makes it possible to preserve the function of the anal sphincter, prevents unnecessary damage to the sacral nerves and induces fewer complications, such as urinary retention, because the intersphincteric space is embryologically avascular; however, it is only applicable to small cysts (10th, 13th, and 14th patients in Table 1).

An anterior approach via the abdomen is recommended when the size of the tumor is about 5 cm, the upper border of the tumor is located above the third sacral vertebra and the lower border does not reach the fourth sacral vertebra, whereas an anterior-posterior approach is recommended if the tumor is bigger than 5 cm and extends largely across the sacrum, or if malignancy is suspected or severe adhesion exists [2-4,13]. However, a cyst can be removed completely with a posterior approach after the contents have been drained carefully, even when the cyst is large, as long as there is no evidence of infection or adhesion (1st and 9th patients in Table 1). A laparoscopic anterior approach can also be used safely, under a proper visual field, if no evidence of malignancy exists.

Among the 47 Korean cases, histological results could be confirmed in 40 cases; the cysts were mostly tailgut cysts (15 cases, 37.5%), followed by epidermoid cysts (13 cases, 32.5%), teratomas (7 cases, 17.5%), dermoid cysts (4 cases, 10%) and rectal duplication (1 case, 2.5%) (Table 4). Looking at the histological characteristics of each cyst, a tailgut cyst develops when a true tail fails to involute completely and the remnant forms a cyst [3,6]. It consists of squamous epithelium, transitional epithelium, glandular ciliated columnar epithelium, and mucinous columnar epithelium, because it develops from a gastrointestinal precursor; scattered smooth muscle fiber bundles can also be observed. It can be differentiated from a rectal duplication, which has two muscle layers containing the myenteric plexus of Auerbach [20-24] (Fig. 4A).

Fig. 3. (B) After excision of the cyst (small arrow), the retrorectal space (big arrow) was exposed. (C) An iatrogenic secondary tract (arrow) that had been made by a previous operation was excised. (D) A closed drainage catheter was inserted, and the wound was closed with interrupted sutures. (E) Excised cyst (big arrow) and secondary tract (small arrow) made by a previous operation. (F) A multiloculated cystic structure was noticed after dividing the specimen fixed in formalin.
An epidermoid cyst can develop from closure of a defect of the ectodermal tube; consequently, it is lined only with squamous epithelium. The genesis of a dermoid cyst is similar to that of an epidermoid cyst, but the cystic wall includes mature dermal adnexa, such as sebaceous glands, hair follicles or sudoriferous glands, as well as squamous epithelium. A teratoma can contain any type of tissue originating from the three embryonic germ layers, as it develops from a totipotential cell [2]. Although a teratoma is usually found in children, it can be found in adulthood if it is too small or the symptoms are extremely mild. A rectal duplication is connected with the rectum and includes structures such as mucosa with the lamina muscularis mucosae, muscle layers, serosa, villi and crypts, as well as two clear muscle layers with the myenteric plexus of Auerbach [3].

Investigating the cyst's contents is essential because, even if the lesion is misdiagnosed as a supralevator abscess and incision and drainage are performed, repeated misdiagnosis can be prevented as long as the contents have been investigated. The contents of a tailgut cyst have no foul odor unless it is infected; cloudy clear, yellow, green or brown material, including mucin, can be identified. The contents of a tailgut cyst are somewhat watery due to the large mucinous content, and even when the cyst is infected, the contents are distinguishable from the usual creamy, foul-odorous pus generated by liquefaction of fatty tissue around the anorectal area, because the major content is mucin. The contents of the tailgut cysts excised at this hospital were watery and light yellow without a foul odor (Table 1, Fig. 4B). The contents of epidermoid and dermoid cysts are smooth, cheesy material, which can turn into pus when the cysts are infected. Therefore, when the content looks like a smooth, cheesy discharge combined with pus, these types of cysts should be suspected.

A tailgut cyst, depending on the cells lining it, can develop into a malignant squamous cell carcinoma, an adenocarcinoma or a carcinoid tumor. One case of an adenocarcinoma and one case of a carcinoid tumor arising in tailgut cysts have been reported in Korea [23,24]. The process of malignant change to an adenocarcinoma is assumed to be related to a p53 gene mutation, similar to the dysplasia-adenocarcinoma sequence in colon cancer. As tumor markers, increased alpha-fetoprotein (AFP) and carcinoembryonic antigen (CEA) can be found with a teratoma, and they are used in follow-up investigation. Increased CEA can also be identified in tailgut cysts. Observations that tumor markers are increased in malignant tailgut cysts and decrease after excision indicate that a connection between these tumor markers and malignancy is probable, although the markers do not always reflect malignancy [23,25]. A squamous cell carcinoma can develop in an epidermoid cyst. A teratoma is usually benign in newborn babies, but the risk of malignancy increases as the baby becomes older and as the cyst contains more solid components. A teratoma is also usually benign in adult patients, but 5-10% of teratomas become malignant if they are not treated [2]. Malignancy was not identified in the patient (11th patient in Table 1) who underwent reoperation due to recurrence. The authors followed the patients with endorectal ultrasonography and MRI, and the average follow-up period was 18.3 months (range, 1 to 64 months).
In conclusion, strong suspicion and the physical examination are the most important factors in diagnosing a retrorectal developmental cyst. The presence of a retrorectal developmental cyst should be borne in mind when a proctologist examines an adult female patient with a history of repeated surgery for an anal fistula, a soft retrorectal prominence, a funnel-shaped posterior anal dimple, an external opening in the retrorectal area without fever or chills, and a discharge on drainage of the cyst that is somewhat watery, unlike the usual foul-odorous creamy pus. Research with larger numbers of cases is required in the future.
Formation and critical dynamics of topological defects in Lifshitz holography

Zhi-Hong Li^{1,*}, Chuan-Yin Xia^{2,3,*}, Hua-Bi Zeng^{3,†} and Hai-Qing Zhang^{1,4,†}
1 Center for Gravitational Physics, Department of Space Science, Beihang University, Beijing 100191, China
2 School of Science, Kunming University of Science and Technology, Kunming 650500, China
3 Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou University, Yangzhou 225009, China
4 International Research Institute for Multidisciplinary Science, Beihang University, Beijing 100191, China

Critical phenomena are of great importance in modern physics, and very few widely applicable principles are known for systems far from equilibrium [1,2]. In particular, understanding critical dynamics in strongly coupled non-equilibrium phase transitions is extremely challenging [3]. Among the available frameworks, the Kibble-Zurek mechanism (KZM) is a paradigmatic theory describing the critical dynamics of the spontaneous generation of topological defects as a system undergoes a continuous phase transition [4-6]. KZM has been tested and extended in various ways [7-13] (refer to [14,15] for reviews).

A continuous phase transition is characterized by the divergence of the coherence length $\xi$ and the relaxation time $\tau$ near the critical point,

$\xi = \xi_0\,|\epsilon|^{-\nu}, \qquad \tau = \tau_0\,|\epsilon|^{-\nu z_d}, \qquad (1)$

in which $\xi_0$ and $\tau_0$ are constant coefficients while $\nu$ and $z_d$ are the static and dynamic critical exponents, respectively. $\epsilon \ll 1$ is the dimensionless distance to the critical temperature: $\epsilon \equiv 1 - T/T_c$. KZM assumes a linear quench of the temperature from the normal state to the symmetry-breaking state with $\epsilon(t) = t/\tau_Q$, with $\tau_Q$ the quench time. At the "freeze-out" time $t_{\rm freeze}$, where the rate of change imposed by the quench is comparable to the relaxation time $\tau$, the system adjusts itself from nearly adiabatic to approximately impulse behavior, in which the order parameter effectively becomes "frozen". Thus, by identifying $t_{\rm freeze}$ and $\tau$ one reaches

$t_{\rm freeze} = \left(\tau_0\, \tau_Q^{\nu z_d}\right)^{1/(1+\nu z_d)}. \qquad (2)$

Due to symmetry breaking, the condensate of the order parameter will be randomly distributed, with each domain having the size $\xi_{\rm freeze} = \xi_0 (\tau_Q/\tau_0)^{\nu/(1+\nu z_d)}$ and picking up its own constant phase. Topological defects will thus form at the vertices of adjacent domains if their phases satisfy the so-called "geodesic rule" [8]. Consequently, the resulting number density of defects in two-dimensional space can be estimated as

$n \sim \frac{1}{\xi_{\rm freeze}^2} = \frac{1}{\xi_0^2}\left(\frac{\tau_0}{\tau_Q}\right)^{2\nu/(1+\nu z_d)}. \qquad (3)$

Eqs. (2) and (3) are the central predictions of KZM.

Gauge/gravity duality (the AdS/CFT correspondence) is a "first-principles" means to study strongly coupled field theories from weakly coupled gravitational theories in one higher dimension [16]. Previous holographic studies of KZM can be found in [17-19]. Finding the various scaling exponents in KZM has recently become a prime and important subject [15]. In this paper, we investigate holographic KZM in the background of a Lifshitz geometry with various Lifshitz exponents $z$, which is conjectured to describe a quantum critical point on the boundary [20-22]. Quantized magnetic fluxes (fluxoids) are spontaneously generated and trapped in the cores of the order-parameter vortices, a typical feature of a type II superconductor. By investigating the scaling laws in Eqs. (2) and (3), we find that, at least at finite temperature, the Lifshitz exponent $z$ in the bulk does not alter the dynamic critical exponent $z_d$ on the boundary; in particular, it remains $z_d = 2$. This surprising conclusion is in line with previous studies [23,24] finding that at finite temperature the boundary field theory behaves like a mean-field theory with $z_d = 2$, irrespective of the bulk Lifshitz exponent $z$.
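For the mean-field exponents ν = 1/2 and z_d = 2 (the values relevant below), the KZM predictions (2) and (3) are easy to evaluate numerically. A quick Python sketch, with τ_0 and ξ_0 set to 1 purely for illustration (they are not fitted values):

```python
import numpy as np

nu, zd = 0.5, 2.0      # mean-field static and dynamic critical exponents
tau0, xi0 = 1.0, 1.0   # illustrative microscopic scales

for tau_Q in (1000, 1400, 1800, 2000):
    t_freeze = (tau0 * tau_Q**(nu * zd))**(1.0 / (1.0 + nu * zd))  # Eq. (2)
    n = xi0**-2 * (tau0 / tau_Q)**(2.0 * nu / (1.0 + nu * zd))     # Eq. (3)
    print(f"tau_Q={tau_Q}: t_freeze={t_freeze:.1f}, n={n:.4f}")
```

With these exponents the density scales as n ∝ τ_Q^{-1/2}, which is exactly the slope recovered by the numerical fits reported below.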
Holographic setup — The gravity background we adopt is the AdS$_4$ black brane with Lifshitz exponent $z$ in Eddington-Finkelstein coordinates,

$ds^2 = -\left(\frac{L}{u}\right)^{2z} f(u)\, dt^2 - \frac{2 L^{z+1}}{u^{z+1}}\, dt\, du + \frac{L^2}{u^2}\left(dx^2 + dy^2\right), \qquad (4)$

where $f(u) = 1 - (u/u_h)^{2+z}$, $L$ is the AdS radius and $u$ is the AdS radial coordinate. The AdS boundary is located at $u = 0$, while $u_h$ is the horizon. The Hawking temperature of the black brane is thus $T = \frac{2+z}{4\pi L} (L/u_h)^z$. Without loss of generality, we rescale $L = u_h \equiv 1$ in the numerics. The line element (4) is invariant under the Lifshitz scaling $t \to \lambda^z t$, $(u, x, y) \to \lambda (u, x, y)$.

The action we adopt is the commonly used Einstein-Maxwell-complex scalar action for holographic superconductors [25]. We work in the probe limit, ignoring the backreaction of the matter fields on the gravitational fields. The ansatz we take is $\Psi = \Psi(t, u, x, y)$, $A_{t,x,y} = A_{t,x,y}(t, u, x, y)$ and $A_u = 0$. At the horizon, we demand regularity of the fields. Near the boundary $u \to 0$, the fields can be expanded as $\Psi = \Psi_0\, u^{\Delta_-} + \Psi_1\, u^{\Delta_+} + \cdots$, with $\Delta_\pm$ the conformal dimensions of the dual scalar operators on the boundary. The asymptotic behavior of $A_t$ is more sophisticated and depends on $z$. Following the AdS/CFT dictionary, $\Psi_0$, $a_t$ and $a_i$ are, respectively, the source of the dual operator $O$, the chemical potential and the superfluid velocity on the boundary. Their corresponding conjugate variables can be obtained by varying the renormalized on-shell action with respect to the source terms, following holographic renormalization [26]. In order to obtain a finite on-shell action, counterterms must be added; for $z = 1$ the counterterm is built from $\gamma$, the determinant of the reduced metric on the boundary, and $n^\mu$, the normal vector perpendicular to the boundary. In order to obtain dynamical gauge fields on the boundary, we impose Neumann boundary conditions for the gauge fields as $u \to 0$ [27,28]. Thus, the surface term $C_{\rm surf.} = \int d^3x \sqrt{-\gamma}\, n_\mu F^{\mu\nu} A_\nu$ must also be added in order to have a well-defined variation. After doing this, one obtains the finite renormalized on-shell action $S_{\rm ren.}$. Consequently, from holographic renormalization we obtain the expectation value of the order parameter as $\langle O \rangle = \Psi_1$. We also impose $\Psi_0 = 0$ in order to have the U(1) symmetry spontaneously broken.

Expanding the $u$-component of the Maxwell equations near the boundary, we arrive at $\partial_t b_t + \partial_i J^i = 0$, which is exactly a conservation equation for the charge density $\rho$ and the current $J^i$ on the boundary, since from the variation of $S_{\rm ren.}$ one obtains $b_t = -\rho$ together with the boundary currents $J^i$ (for $z = 2$ the latter involve an ultraviolet cutoff $\Lambda$, and the surface term $C_{\rm surf.}$ must again be included). Thus, near the $u \to 0$ boundary we obtain the conservation equation $\partial_t \rho = \partial_i J^i$.

From the dimensional analysis of holographic superconductors [25], increasing the charge density is equivalent to decreasing the temperature. Therefore, we need to know the mass dimension of the charge density for different Lifshitz exponents $z$. Following [21], this is conveniently implemented by assigning time and space the mass dimensions $[t] = -z$ and $[\vec{x}] = -1$. Thus, the temperature has mass dimension $[T] = z$ and the charge density has mass dimension $[\rho] = 2$. In order to linearize the temperature near the critical point according to KZM, we quench the charge density $\rho$ as

$\rho(t) = \rho_c \left(1 - \frac{t}{\tau_Q}\right)^{-2/z},$

where $\rho_c$ is the critical charge density of the static, homogeneous holographic superconducting system.
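A small helper implementing this quench schedule is given below. The −2/z exponent follows from the dimensional analysis above ([ρ] = 2 and [T] = z give T/T_c = (ρ_c/ρ)^{z/2}); since the extracted text lost the displayed formula, treat the schedule as a reconstruction rather than the paper's verbatim expression.

```python
def rho_quench(t, tau_Q, rho_c, z):
    """Charge-density quench implementing eps(t) = t/tau_Q with eps = 1 - T/T_c.
    Reconstruction: T/T_c = (rho_c/rho)**(z/2) gives the -2/z power below.
    Valid for t < tau_Q; the runs below stop at T = 0.8*T_c, i.e. t = 0.2*tau_Q."""
    return rho_c * (1.0 - t / tau_Q) ** (-2.0 / z)

# Example: z = 2 quench stopped at T = 0.8 T_c, using the rho_c quoted below.
tau_Q, rho_c, z = 1800.0, 9.0445, 2
print(rho_quench(0.0, tau_Q, rho_c, z), rho_quench(0.2 * tau_Q, tau_Q, rho_c, z))
```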
Numerical schemes -In this paper, we choose the Lifshitz exponents in the bulk as z = 1, 2. Physically, these correspond to relativistic and non-relativistic systems on the boundary, respectively. Besides, since we would like to investigate dual scalar operators with the same conformal dimension as we vary z, for convenience we set Δ_+ = 3. Therefore, the mass squared values are m^2 = 0 and m^2 = −3 for z = 1 and z = 2, respectively. Correspondingly, the critical charge densities for the static homogeneous superconductors are ρ_c ≈ 7.5877 for z = 1 and ρ_c ≈ 9.0445 for z = 2. We take advantage of the Chebyshev pseudo-spectral method with 21 grid points in the radial direction u, and use a Fourier decomposition in the (x, y)-directions since periodic boundary conditions along (x, y) are imposed. We thermalize the system by adding small random seeds in the normal state before the quench, to make sure that the system before the quench is in the symmetric phase, which is a requirement of KZM. Different from putting the seeds on the boundary as in [17,18], we add the random seeds to the fields in the bulk, satisfying the distributions ⟨s(t, x^μ)⟩ = 0 and ⟨s(t, x^μ) s(t′, x′^μ)⟩ = ζ δ(t − t′) δ(x^μ − x′^μ), where μ = u, x, y, with amplitude ζ ≈ 10^{−3}. (Other, smaller magnitudes of ζ lead to similar results; in principle, ζ cannot be too large, since the seeds play the role of perturbations that thermalize the system.) The system evolves using the fourth-order Runge-Kutta method with time step Δt = 0.02 for z = 1 and Δt = 0.0046 for z = 2. Filtering of the high-momentum modes is implemented following the "2/3 rule", in which the uppermost one third of the Fourier modes is removed [29]. Magnetic fluxoids and order parameter vortices -We quench the system by linearly decreasing the temperature through the critical point, then stop and keep the temperature at T = 0.8T_c; t = 0 is the instant at which the system crosses the critical temperature T_c. In the left panel of Fig. 1, we show the magnetic fluxes generated through KZM as the system enters the final equilibrium state, with τ_Q = 1800 and Lifshitz exponent z = 2. In the right panel of Fig. 1, the corresponding order parameter vortices are exhibited. The locations of the cores of the vortices are exactly the positions of the magnetic fluxes, a feature of a type II superconductor. Profiles of a single vortex can be seen in Fig. 2, in which we select the vortex at the location (x, y) ≈ (23, 26) for z = 1 (top row) and (x, y) ≈ (23, 22) for z = 2 (bottom row). The requirements of minimal free energy and periodicity of the phase of the order parameter imply the quantization of the magnetic flux [30], i.e., a magnetic fluxoid with flux Φ_c = 2πN, where N is an integer. By numerically integrating the magnetic flux of the single vortex for z = 1 (z = 2) in the top row (bottom row) of Fig. 2, we find the flux of this vortex to be Φ_c ≈ 6.11 (6.18), which demonstrates the existence of a quantized magnetic fluxoid with Φ_c = 2π (winding number N = 1). Other magnetic fluxes, for instance those of the vortices in Fig. 1, are also checked to be quantized with N = ±1 vorticity. The width λ of the magnetic field can be fitted by B(r) ∼ B_0 e^{−r/λ}, with B_0 a constant coefficient, while the width of the order parameter vortex is fitted by O(r) ∼ O(∞) tanh(r/(√2 ξ)), with O(∞) the condensate value far from the vortex core [30]. From the last column in Fig. 2, we find λ ≈ 1.69 and ξ ≈ 0.88 for z = 1 (a sketch of such a profile fit is given below).
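The exponential and tanh profile fits quoted above can be reproduced with a standard least-squares routine, as in the sketch below. The synthetic radial profiles are illustrative stand-ins for the data of Fig. 2; only the fitting procedure, not the data, is meant to mirror the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def B_profile(r, B0, lam):                # magnetic field tail, width lambda
    return B0 * np.exp(-r / lam)

def O_profile(r, O_inf, xi):              # vortex core profile, width xi
    return O_inf * np.tanh(r / (np.sqrt(2.0) * xi))

rng = np.random.default_rng(0)
r = np.linspace(0.05, 10.0, 200)
B_data = B_profile(r, 1.0, 1.69) + 0.01 * rng.normal(size=r.size)   # mock data
O_data = O_profile(r, 1.0, 0.88) + 0.01 * rng.normal(size=r.size)

(B0, lam), _ = curve_fit(B_profile, r, B_data, p0=[1.0, 1.0])
(O_inf, xi), _ = curve_fit(O_profile, r, O_data, p0=[1.0, 1.0])
print(f"lambda = {lam:.2f}, xi = {xi:.2f}, kappa = lambda/xi = {lam / xi:.2f}")
```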
From these widths, the Landau-Ginzburg parameter is κ_{z=1} = λ/ξ ≈ 1.92 > 1/√2, which indicates a type II superconductor. For z = 2 we reach the same conclusion, with κ_{z=2} = λ/ξ = 1.35/0.75 = 1.8 > 1/√2. These type II results are consistent with the appearance of the magnetic fluxoids in the holographic superconductor. Evolution of average condensate -The time evolution of the average value of the order parameter O(t), from t = 0 (T = T_c) to the final equilibrium state (T = 0.8T_c), is exhibited in Fig. 3. In the left panel, with Lifshitz exponent z = 1, the black line is the instantaneous equilibrium value of the average condensate, while the colored lines from left to right correspond to the dynamical values of the average condensate under different quenches, τ_Q = 2000 (green), 1400 (red) and 1000 (blue), respectively. The lines in the right panel, with z = 2, follow the same convention and can be read directly from the figure. From Fig. 3 we see that in the beginning the dynamical values of the condensate remain negligible and lag behind the instantaneous equilibrium values. For instance, for the quench τ_Q = 1000 in the left panel (z = 1), the dynamical value remains negligible until the lag time t_L/τ_Q ∼ 0.15, and then begins to grow rapidly, reaching the approximate equilibrium value at t/τ_Q ∼ 0.23. This behavior, with t_L larger than but proportional to t_freeze, was reported as well in the previous literature [18,19,31]. For convenience, we list in Table I the approximate values of t_L for the various quenches presented in Fig. 3. From Table I we see that, for the same Lifshitz exponent, a slower quench (bigger τ_Q) corresponds to a longer lag time t_L. This is consistent with Eq. (2) if the power νz_d/(1+νz_d) is positive. We will further discuss the relation between t_L and τ_Q in the next subsection, and indeed we will see there that t_L scales with τ_Q in the same way as t_freeze. In addition, we see that for the same Lifshitz exponent, the final equilibrium condensates under different τ_Q's are almost identical, since the final temperatures are the same (T = 0.8T_c). Another interesting phenomenon is that for a slower quench, for instance τ_Q = 2000 in the left panel of Fig. 3 (z = 1), after the lag time the dynamical condensate rapidly grows, then catches up with and coincides with the instantaneous equilibrium value (black line). This behavior was also reported in the past [17][18][19]. The end of this coincident growth (t/τ_Q = 0.2) corresponds exactly to the end of the quench, and the coincidence indicates an adiabatic regime for the growth. However, for a fast quench (for example τ_Q = 1000 with z = 1) there is no such coincidence of the condensate before the end of the quench. Vortex number density and "freeze-out" time -In the left panel of Fig. 4, we show the relation between the number density of vortices n and the quench time τ_Q for different Lifshitz exponents z; n was counted in the final equilibrium state. For z = 1, the vortex numbers are almost the same (n ≈ 25) in the fast quench regime, which is consistent with previous results in condensed matter and holography [17][18][19]. For z = 2, the vortex number density is n ≈ 18 in the fast quench regime. However, for slower quenches, the scaling law between n and τ_Q for z = 1 is roughly n = n_1 τ_Q^a, with n_1 ≈ 703.9170 ± 1.1333 and a ≈ −0.4998 ± 0.0162, in which the error bars stand for the standard deviations.
For z = 2, this relation is n ≈ (312.9211 ± 1.2379) × τ_Q^b, with a fitted exponent b again close to −1/2 (b ≈ −0.496, the value implied by the critical exponents extracted below). As we have stated in the previous subsection, the lag time t_L, defined as the time at which the order parameter begins to grow rapidly, can reflect the "freeze-out" time t_freeze [18,31]. In the numerics we operationally set t_L as the time when O ∼ 0.1, following [18,19,31]. In the right panel of Fig. 4 we exhibit the relation between t_L and τ_Q. The error bars are not shown, since they are very tiny. We see that for fast quenches the lag time is almost constant, for both z = 1 and z = 2. However, for slow quenches, one can read off that t_L ≈ 5.0540 × τ_Q^{0.4893} for the Lifshitz exponent z = 1 and t_L ≈ 3.6064 × τ_Q^{0.4845} for z = 2. Therefore, from the two scaling relations in Eqs. (2) and (3), one can readily evaluate the dynamic critical exponent z_d and the static critical exponent ν on the boundary as (z_d ≈ 1.9579, ν ≈ 0.4893) for z = 1 and (z_d ≈ 1.9532, ν ≈ 0.4812) for z = 2 (a short worked inversion is sketched after the conclusions). The holographic results for z_d and ν are very close to the mean-field theory values z_d = 2 and ν = 1/2. Therefore, we see that the dynamic critical exponent z_d (as well as ν) on the boundary is independent of the bulk Lifshitz exponent z. Conclusions -We investigated the spontaneous formation and time evolution of topological defects from KZM in Lifshitz holography. The magnetic fluxes were found to be quantized, and the system behaved as a type II superconductor. From the time evolution of the average condensate, we extracted the values of the lag time, which can reflect the "freeze-out" time. The KZ scaling relations, i.e., vortex number density versus quench time and lag time versus quench time, matched the KZM predictions very well. These two scaling relations implied that the dynamic critical exponent of the boundary field theory is independent of the Lifshitz exponent in the bulk. This conclusion is in line with previous discussions in [23,24], in which the authors perturbed the fields around the critical point to study the critical exponents. In our paper, without any perturbations, we arrived at similar results by directly studying the formation of topological defects. According to the discussions in [2,23], at least at finite temperature, critical dynamics is governed by the dynamics of the critical point itself rather than by the Lifshitz exponent z in the underlying geometry. Therefore, it will be interesting to study the critical dynamics at zero temperature in Lifshitz geometry. We leave this for future work.
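As a closing worked example, the extraction of (z_d, ν) above can be reproduced by inverting the two KZM scaling relations. Writing p for the fitted lag-time exponent (t_L ∝ τ_Q^p, so p = νz_d/(1+νz_d)) and q for the magnitude of the fitted defect-density exponent (n ∝ τ_Q^{-q}, so q = 2ν/(1+νz_d)) gives ν = q/(2(1−p)) and z_d = 2p/q. The sketch below applies this inversion to the fitted values quoted in the text.

```python
def critical_exponents(p, q):
    """Invert p = nu*z_d / (1 + nu*z_d) and q = 2*nu / (1 + nu*z_d)."""
    nu = q / (2.0 * (1.0 - p))
    z_d = 2.0 * p / q
    return z_d, nu

# (bulk z, lag-time exponent p, |defect-density exponent| q) from the fits
for z, p, q in [(1, 0.4893, 0.4998), (2, 0.4845, 0.4961)]:
    z_d, nu = critical_exponents(p, q)
    print(f"bulk z = {z}: z_d = {z_d:.4f}, nu = {nu:.4f}")
```

Both cases return values close to the mean-field pair (z_d, ν) = (2, 1/2), reproducing, up to rounding, the numbers quoted above.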
2019-12-22T13:56:37.000Z
2019-12-22T00:00:00.000
{ "year": 2019, "sha1": "ef37c09cc51c55c6d57d31c5590352b0d1fdef0a", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP04(2020)147.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "fee8113961ce7f9d5e31ec33b09ce33ba0394718", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
53344893
pes2o/s2orc
v3-fos-license
From Nutrition to Health: The Role of Natural Products – A Review The word natural literally means something that is present in or produced by nature and not artificial or man-made (Spainhour, 2005), although many effective poisons are also natural products (Schoental, 1965). When used in writing or speech, the word natural often refers to something good or pure (Spainhour, 2005). Today, the term natural products is commonly understood to refer to herbs, herbal concoctions, dietary supplements, traditional medicine including Chinese traditional medicine, or other alternative medicine (Holt and Chandra, 2002). In general, natural products are either of prebiotic origin or originate from microbes, plants, or animal sources (Nakanishi, 1999a; Nakanishi, 1999b). As chemicals, natural products include such classes of compounds as terpenoids, polyketides, amino acids, peptides, proteins, carbohydrates, lipids, nucleic acid bases, ribonucleic acid (RNA), deoxyribonucleic acid (DNA), and so forth. Natural products are an expression of an organism's increase in life complexity by nature (Jarvis, 2000). The prevalence of undernutrition in today's world varies greatly based on region and country, and global trends in wasting and stunting have been decreasing. However, the situation is still different in countries with extremely unstable governments and those with civil strife, where prevalence rates of up to 20-30% have been recorded (Bryce et al., 2008). Undernutrition in developed nations happens to be a problem among people living in rural areas, which could be because inhabitants of rural areas mostly suffer from poor nutrition in terms of micronutrient deficiencies. Low income and poor access to nutritious foods are common factors in poor urban societies leading to undernutrition (Shetty, 2009; Black et al., 2008). The prevalence of undernutrition remains moderate to high in developing nations, depending on the relative degree of economic development. These countries have relatively high prevalence rates of child wasting and stunting of approximately 30-40% (Gutierrez-Delgado and Guajardo-Barron, 2009; Delisle, 2008). Natural products and body anatomy Most body structures, like bones and muscle tissues, are formed and nourished by natural products such as calcium, phosphorus, vitamin D, and proteins. Bone formation consists of a biological cascade through mesenchymal proliferation, chondrogenesis, osteogenesis, and remodelling (Reddi, 1994). For optimal bone mineral accrual in the developing skeleton, calcium and vitamin D are very important. The human skeleton is a rich reservoir of calcium, with finely tuned mechanisms for releasing calcium as needed. Calcium homeostasis is maintained during low calcium intake or poor vitamin D status through regulation by the parathyroid gland and kidneys, at the expense of bone. Adolescents are at risk of poor nutritional status in both calcium and vitamin D. Lack of calcium accumulation in the skeleton of an adolescent or a growing child can have negative consequences for the achievement of peak bone mass (Bailey et al., 2000; Bachrach, 2001; Harkness and Bonny, 2005). Differentiation of mesenchymal stem cells into osteoblasts to produce new bone tissue is effectively induced by bone morphogenetic proteins (BMPs), a phenomenon known as osteoinduction (Urist, 1965; Wozney, 2002).
Growth factors contained in platelet-rich plasma (PRP) have been proposed to enhance bone graft maturation and to support repair in the treatment of small bone defects in maxillofacial surgery when combined with anorganic bovine bone (Roldan et al., 2004). The composition of collagens and noncollagenous matrix proteins defines the organic phase of mineralized tissues. Bone, dentin, and cementum contain collagen type I, cartilage contains collagen type II, and enamel is virtually free of collagen (Sommer et al., 1996). Proteins are the most important nutrients for maintaining body structures; they are the major component of muscles, and it has long been believed that flesh makes flesh (Bischoff and Voit, 1860). Skin and bone contain fibrous proteins. In the course of differentiation, keratinocytes undergo a series of morphological and biochemical changes, including the expression of large quantities of proteins which constitute cytoplasmic filamentous networks, keratohyalin granules and cornified envelopes (Manabe et al., 1997). Structural proteins, most of which are fibrous, provide stiffness and rigidity to otherwise fluid biological components. Actin and tubulin are globular and soluble as monomers, but upon polymerization they form long, stiff fibres that comprise the cytoskeleton, which allows the cell to maintain its shape and size. Connective tissue such as cartilage has collagen and elastin as its main components, and keratin is an important component of hard or filamentous structures such as hair, nails, feathers, hooves, and some animal shells (Van Holde and Mathew, 1996). Cell signalling Natural products play important roles in many cellular activities, such as cell signalling, which aids cellular communication. Calcium and vitamin D are necessary for many cellular processes. The primary role of calcium is to serve as a second messenger in virtually all cells; ionized calcium is the most common signal transduction element, owing to its ability to bind reversibly to proteins. Vitamin D receptors have been identified in most body cells, such as the small intestine, colon, brain, skin, prostate, gonads, breast, lymphocytes, osteoblasts, B-islet cells, and mononuclear cells (Holick, 2004). At the intracellular level, 1,25-dihydroxyvitamin D interacts with vitamin D receptors and the retinoid X receptor to enhance or inhibit the transcription of vitamin D-responsive genes, including calcium-binding proteins. 1,25-Dihydroxyvitamin D has also been shown to stimulate many noncalcemic physiological functions, including insulin production, thyroid hormone secretion, and activated T and B lymphocyte function (Harkness and Bonny, 2005). Apart from vitamin D and calcium, proteins are also involved in several cellular processes. Being the chief actors within the cell, proteins are said to carry out the duties specified by the information encoded in genes (Lodish et al., 2004). Most biological molecules are relatively inert elements upon which proteins act, certain types of RNA being an exception (Voet and Voet, 2004). Within the cell, proteins act as enzymes that catalyse chemical reactions; owing to the specific nature of enzymes, each accelerates only one or a few chemical reactions. Most of the reactions involved in metabolism, as well as the manipulation of DNA in processes such as DNA replication, DNA repair, and transcription, are carried out by enzymes (Bairoch, 2000). Cell signalling and signal transduction are among the numerous processes in which many proteins are involved.
Insulin is a good example of an extracellular protein that transmits a signal from the cell in which it was synthesized to cells in distant tissues. Some proteins are membrane proteins that act as receptors, whose main function is to bind a signalling molecule and induce a biochemical response in the cell (Branden and Tooze, 1999). The protein components of the adaptive immune system are the antibodies, whose main function is to bind antigens, or foreign substances in the body, and target them for destruction. Antibodies are either secreted into the extracellular environment or anchored in the membranes of specialized B cells. Electrophysiology The electrophysiological properties of some natural products serve as a means of communication or cell signalling in several body cells and glands. Many cellular functions, such as electrical signal generation in nerve and muscle cells, contraction in muscle cells, and secretion in nerve and gland cells, depend on the significant role of widely distributed voltage-dependent calcium channels (Hagiwara, 1983; Hagiwara and Byerly, 1981). They exist not only in fully differentiated cells but also in oocytes (Okamoto et al., 1977) and in developing nerve and muscle cells (Spitzer, 1979). There are several reports that calcium channels are restricted to, or more prominent in, the less differentiated states of excitable cells such as skeletal muscle cells (Kano, 1975; Kidokoro, 1973; Kidokoro, 1975) and nerve cells (Matsuda et al., 1978; Mori-Okamoto et al., 1983; Spitzer and Baccaglini, 1976). The involvement of calcium channels as well as sodium channels in generating action potentials in embryonic chick skeletal muscle cells has been established (Fukuda et al., 1976; Kano, 1975; Kano and Yamamoto, 1977). A chloride component, in addition to the sodium and calcium components of the action potential, has also been shown to be involved, in particular in the long-lasting plateau phase of the action potential in these muscle cells (Fukuda et al., 1976). ATP is available in mammalian neurones owing to its well-known role as a major energy carrier for cellular metabolism, and it has been reported to act as a fast transmitter in the mammalian brain (Edwards and Gibb, 1993). An action potential is a transient depolarization of the membrane potential of excitable cells. Action potentials serve two main functions: to transmit and encode information, and to initiate cellular events such as muscular contraction. An action potential results from a transient change in the properties of the cell membrane, from a state where it is much more permeable to K+ than to Na+, to a reversal of these permeability properties. Thus, during the action potential, an influx of Na+ is responsible for the rapid depolarization and an efflux of K+ causes repolarization. Changes in membrane ionic permeability are due to the opening and closing of voltage-gated ion channels, and the properties of such channels explain additional phenomena such as refractoriness, threshold and cellular excitability (a numerical sketch of how relative permeabilities set the membrane potential is given below). Action potentials conduct with a finite velocity along nerve axons, and the actual velocity depends on a number of factors, including fibre radius, temperature, functional ion channel number and the presence of a myelin sheath (Fry and Jabr, 2010). The cations Na+, K+, Mg2+, and Ca2+ are involved in the propagation of nerve impulses and in muscle and heart contraction. Phosphorus (P), in phosphoric ester form (ATP), results from the third step of cell respiration.
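The statement that the membrane potential is set by the relative ionic permeabilities can be illustrated with the Goldman-Hodgkin-Katz voltage equation. The concentrations and permeability ratios below are textbook-style illustrative values for a mammalian neurone, not measurements from any of the studies cited here; this is a minimal sketch of the principle.

```python
import math

R, T, F = 8.314, 310.0, 96485.0       # gas constant, body temperature (K), Faraday

def ghk_potential(P_K, P_Na, P_Cl, K_o, K_i, Na_o, Na_i, Cl_o, Cl_i):
    """Goldman-Hodgkin-Katz voltage equation (returns volts).
    Chloride is an anion, so its inside/outside concentrations are swapped."""
    num = P_K * K_o + P_Na * Na_o + P_Cl * Cl_i
    den = P_K * K_i + P_Na * Na_i + P_Cl * Cl_o
    return (R * T / F) * math.log(num / den)

ions = dict(K_o=4.0, K_i=140.0, Na_o=145.0, Na_i=12.0, Cl_o=110.0, Cl_i=10.0)  # mM
rest = ghk_potential(1.0, 0.04, 0.45, **ions)   # K+-dominated membrane: resting state
peak = ghk_potential(1.0, 20.0, 0.45, **ions)   # Na+-dominated membrane: spike peak
print(f"resting ~ {rest * 1e3:.0f} mV, action-potential peak ~ {peak * 1e3:.0f} mV")
```

Raising the relative Na+ permeability alone flips the sign of the potential from roughly −70 mV to roughly +50 mV, which is the depolarization and repolarization cycle described above.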
The active transport of Na+ and K+ through the plasma membrane uses the energy of ATP hydrolysed by the Na+/K+-ATPase, activated by Mg2+, and constitutes an essential cell function requiring around 25% of the energy metabolism of a man at rest (Lehninger et al., 1994). The formation and the use of energy-rich bonds require Mg2+ (Durlach et al., 2000). Muscle contraction Although there is hormonal and neuronal interplay, diverse functions in the body, such as movement, respiration, digestion, blood circulation, heartbeat, micturition and parturition, are facilitated by muscle contraction. Communication between muscle cells that leads to muscle contraction results from the formation of action potentials, which is due to the electrophysiological properties of these tissues. It is an established fact that all muscle fibres use Ca2+ as their main regulatory and signalling molecule. Therefore, the variable expression of proteins involved in Ca2+ signalling and handling plays a key role in the contractile properties of muscle fibres. The contraction and relaxation properties of a muscle fibre are largely determined by the molecular diversity of the main proteins in the Ca2+ signalling apparatus, otherwise known as the calcium cycle. The Ca2+ signalling apparatus includes: the ryanodine receptor, which is the sarcoplasmic reticulum Ca2+ release channel; the troponin complex, which mediates the Ca2+ effect on the myofibrillar structures leading to contraction; the Ca2+ pump, responsible for Ca2+ reuptake into the sarcoplasmic reticulum; and calsequestrin, the Ca2+ storage protein of the sarcoplasmic reticulum. In addition, a multitude of Ca2+-binding proteins is present in muscle tissue, including parvalbumin, calmodulin, S100 proteins, annexins, sorcin, myosin light chains, α-actin, calcineurin, and calpain. These Ca2+-binding proteins may either exert an important role in Ca2+-triggered muscle contraction under certain conditions or modulate other muscle activities such as protein metabolism, differentiation, and growth. Muscle diseases have been shown to be associated with alterations of several Ca2+ signalling and handling molecules. Pathophysiological conditions like malignant hyperthermia, dystrophinopathies and Brody's disease appear to be associated with functional alterations of Ca2+ handling; these also underline the importance of the affected molecules for correct muscle performance (Berchtold et al., 2000). Body electrolytes and homeostasis The major electrolytes found in the body are sodium (Na+), potassium (K+), calcium (Ca2+), magnesium (Mg2+), chloride (Cl−), bicarbonate (HCO3−), phosphate (HPO4^2−), and sulphate (SO4^2−) (Ahmad and Ahmad, 1993). For humans to be in an adequate physical condition and a highly efficient state, a stable volume, osmotic concentration and electrolyte composition of the internal fluids are necessary prerequisites (Zorbas et al., 2002). The macroelements Ca, Mg, K, Na, and phosphorus (P) are generally integrated into anatomic structures (bone elements, nucleic acids, membranes, proteins, enzymes), although they are also involved in the ionized active form and regarded as essential trace elements, as in voltage-gated ionic channels (Allain, 1996). In active form, they are of particular importance for metabolic balance in sports and during physical exercise (Maughan, 1999). Na+ contributes to the maintenance of osmotic pressure, water regulation, and acid-base balance.
Ca2+ controls vascular tonicity and coagulation of the blood (Lehninger et al., 1994). Blood composition Among the components of blood are proteins (albumin, globulin, and fibrinogen), the fat cholesterol, the carbohydrate glucose, calcium, phosphorus, sodium chloride (NaCl), urea, uric acid, nonprotein nitrogen (N.P.N.) compounds, and creatine. These natural product components are distributed by the blood to body cells and tissues for the necessary physiological activities. Red blood cell maturation Red blood cell formation is important for maintaining a normal red blood cell count and blood volume. The erythropoietic cells of the bone marrow are among the most rapidly growing and reproducing cells in the entire body, owing to the continuing need to replenish red blood cells. Their maturation and rate of production are greatly affected by a person's nutritional status. For the final maturation of red blood cells, two vitamins, vitamin B12 (cyanocobalamin) and folic acid, are important. Both are essential for the synthesis of DNA, because each, in a different way, is required for the formation of thymidine triphosphate, one of the essential building blocks of DNA. Abnormal and diminished DNA, as well as failure of nuclear maturation and cell division, can be caused by lack of either vitamin B12 or folic acid (Guyton, 2006). Haemoglobin formation Haemoglobin serves as the oxygen transport medium through the formation of oxyhaemoglobin in the blood, which is distributed to body cells and tissues; maintenance of this role is achieved by maintaining a normal haemoglobin count. Initially, to form a pyrrole molecule, succinyl-CoA formed in the Krebs metabolic cycle binds with glycine. In turn, four pyrroles combine to form protoporphyrin IX, which then combines with iron to form the heme molecule. Finally, each heme molecule combines with a long polypeptide chain, a globin synthesized by ribosomes, forming a subunit of haemoglobin called a haemoglobin chain. Four of these chains in turn bind together loosely to form the whole haemoglobin molecule. The different types of chains, based on the amino acid composition of the polypeptide portion, are designated alpha chains, beta chains, gamma chains, and delta chains. The most common form of haemoglobin in the adult human being, haemoglobin A, is a combination of two alpha chains and two beta chains (Guyton, 2006). Heredity The roles of natural products like deoxyribonucleic acid (DNA) and ribonucleic acid (RNA) in heredity are too important to be overlooked. It is a widely accepted hypothesis that deoxyribonucleic acid (DNA) is the genetic carrier of information and that ribonucleic acid (RNA) is an essential component in the expression of this information in polypeptide synthesis (Hurwitz and August, 1963). The DNA molecule is a repository of genetic information. Therefore, there must be some precise mechanism for duplicating DNA so that the information within it can be handed down unchanged from one generation to the next. Inherited information resides in the precise sequence of bases in DNA, and this information is transferred to messenger RNA, which can then specify the sequence of amino acids in a particular protein (Davern and Cairns, 1963). Enzymes and hormones synthesis Almost every process in a biological cell needs enzymes to occur at significant rates, and the set of enzymes made in a cell determines which metabolic pathways occur in that cell.
Pancreatic adaptation to the diet is a phenomenon arising from diet-induced modifications in the enzyme composition of pancreatic tissue and secretion, which has been described in many species (Poort and Poort, 1980; Corring, 1977; Ben Abdeljilil and Desnuelle, 1964; Reboud et al., 1966). A carbohydrate-rich diet results in an increase in the specific activity of amylase with a concomitant decrease in the specific activity of chymotrypsinogen. The converse is true for a protein-rich diet (Ben Abdeljilil and Desnuelle, 1964), and the same phenomenon has been described for the lipase-colipase system (Bazin et al., 1978). Such modifications of pancreatic content are now thought to be induced by changes in the biosynthetic rate of individual enzymes rather than by non-parallel secretion (Palade, 1975; Dagorn, 1978) or differential enzyme catabolism (Kramer and Poort, 1972). Changes in the individual rates of enzyme biosynthesis have been shown to occur in the developing embryonic pancreas (Kemp et al., 1972), after 30 days (Reboud et al., 1966), and more recently after 5 days of dietary adaptation in the adult rat (Poort and Poort, 1980). These long-term adjustments in enzyme synthesis have been correlated with concomitant adaptive modifications in pancreatic content (Reboud et al., 1966). However, no data are available to extend these conclusions to the short-term modifications in enzyme content that have been reported to occur within 24 hours (Deschodt-Lanckman et al., 1971). It became evident that rapid modulation of pancreatic enzyme synthesis was possible: hormonal stimulation (Dagorn and Mongeau, 1977) or enteral administration of a product of digestion produced changes in the biosynthetic rates of amylase, chymotrypsinogen and lipase within 15-30 min. It is thus possible that a meal has an immediate regulatory effect on pancreatic enzyme synthesis (Dagorn and Lahaie, 1981). Entire organ functions are controlled by hormones, affecting processes as diverse as growth and development, reproduction, and sexual characteristics. Hormones also influence energy storage and use, as well as controlling the volume of fluid and the levels of salts and sugars in the blood. Very small amounts of hormones can trigger large responses in the body. Secretin and glucagon are members of a family of peptides, the vasoactive intestinal polypeptide (VIP)-secretin-glucagon family, which also includes pituitary adenylate cyclase-activating polypeptide (PACAP), gastric inhibitory polypeptide (GIP), parathyroid hormone (PTH), growth hormone-releasing hormone (GHRH), and the exendins (Chow et al., 1997; Paul and Ebadi, 1993). All these peptides possess a marked amino acid sequence homology, are widely distributed in the body, and exert pleiotropic physiological effects, in many instances acting in a paracrine manner. The effects of these peptides are initiated by their specific interaction with cell-surface receptors belonging to the superfamily of G-protein-coupled receptors. These receptors are glycoproteins with a large hydrophilic extracellular domain followed by 7 highly conserved hydrophobic transmembrane helices, and their signalling mechanism primarily involves the activation of the adenylate cyclase (AC)/protein kinase A (PKA) and phospholipase C (PLC)/PKC cascades. Natural products as nutritional supplements Apart from being the major sources of nutrition in the form of carbohydrates, proteins, fats, etc., natural products are also used as nutritional supplements.
According to the Dietary Supplement Health and Education Act of 1994 of the United States (Public Law 103-417, DSHEA), a dietary supplement is a product that is meant to supplement one's diet. Dietary supplements contain one or more of the following ingredients: a vitamin, a mineral, an herb or other botanical, an amino acid, or another dietary substance, or a combination of these ingredients or their extracts. By definition, a dietary supplement is intended for ingestion in pill, capsule, tablet, or liquid form, but it is not for parenteral use. The most commonly used dietary supplement products are echinacea, ginseng, ginkgo, garlic, glucosamine, St. John's wort, peppermint, fish oils/omega-3 fatty acids, ginger, soy and so forth (Low Dog and Markham, 2007). Elemental analysis of garlic indicated that the powdered plant material contained mainly potassium, phosphorus, iron and calcium, among others, while its phytochemical screening revealed the presence of chemical compounds such as saponins, steroids, tannins, carbohydrates and cardiac glycosides (Mikail, 2010). Vitamin/mineral supplements can be defined as products that are formulated to supply vitamin and/or mineral nutrients. They are often categorized as vitamin A, vitamin E, vitamin B-complex, multi-vitamins, calcium (Ca), iron (Fe), and multi-vitamins with mineral supplements (Kim, 1997; Kim-Park et al., 1991). In general, it has been found that school children select multi-vitamins with minerals and multi-vitamin supplements more frequently than other types of supplements (Bowering and Clancy, 1986). Vitamin/mineral supplement use has been reported to be influenced by several factors. With respect to young children, daily eating habits can be a particularly significant factor affecting supplement use, as mothers often adopt vitamin/mineral supplements as insurance against possibly poor or unbalanced meals. Supplements are also often given to promote appetite in young children (Song and Kim, 1998). This is perceived to be an important issue, as it is thought that young children have numerous eating problems, including skipping meals, eating small meals and a strong dislike for some foods (Pipes, 1992). According to demographic characteristics, females, individuals in high socioeconomic categories, and individuals living in large cities tend to take vitamin/mineral supplements more often than their contrasting groups (Bowering and Clancy, 1986). Natural products roles in sanitation and personal care/cosmetics Beyond their use as nutritional supplements, natural products also have interesting roles in sanitation, personal care and cosmetic surgery. A large variety of products and formulations are considered personal care products in the United States (US) and cosmetics in the European Union (EU); these products include soaps, shampoos and shower products, sunscreens, skin and hair care products, hair dyes, make-up, lipsticks, toothpastes, dental care products, deodorants, personal hygiene products and many others (Antignac et al., 2011). Decorative cosmetics are principally used to beautify or cover minor, visible imperfections. Shiny, oily or inhomogeneous colourings, as well as slight imperfections of the skin surface, are corrected by these kinds of cosmetics. These products play an important role, creating the effect of youthfulness and wholesomeness, which is becoming more and more important in our society today (Valet et al., 2007).
Personal care products (PCP) from botanical ingredients include a variety of preparations, such as plant extracts, expressed juices, tinctures, waxes, vegetable oils, lipids, plant carbohydrates and essential oils, as well as purified plant components such as vitamins, antioxidants or other substances with biological activity (Allemann and Baumann, 2009). Soap, probably the oldest skin and cloth cleanser, has been used for thousands of years. Soap is produced from the saponification of fats and oils by alkali; in the manufacturing process, triglycerides (fats and oils) or fatty acids are transformed into the corresponding alkali salt mixtures of fatty acids (Friedman and Wolf, 1996). In the pulp industry, soft and liquid soaps are prepared from tall oil and tall resin by-products (Rappe et al., 1990). Products designed to improve the appearance of the aging face by altering the structure and function of the skin, in ways that are important for cosmetic surgeons, are termed cosmeceutics. Cosmeceutics thus aid cosmetic operations; these products include alpha hydroxy acids, beta hydroxy acids, polyhydroxy acids, vitamins, retinoids, skin-lightening agents, and sunscreens (Draelos, 1999). Natural products as sources of drugs Natural products have played an important role throughout the world for thousands of years in the treatment and prevention of human diseases. Natural product medicines have come from various source materials, including terrestrial plants, terrestrial microorganisms, marine organisms, and terrestrial vertebrates and invertebrates. An analysis of the origin of the drugs developed between 1981 and 2002 showed that natural products or natural product-derived drugs comprised 28% of all new chemical entities (NCEs) launched onto the market (Newman et al., 2003). In addition, 24% of these NCEs were synthetic or natural mimic compounds, based on the study of pharmacophores related to natural products. This combined percentage (52% of all NCEs) suggests that natural products are important sources of new drugs and are also good lead compounds suitable for further modification during drug development (Chin et al., 2006). Natural products can come from anywhere. People most commonly think of plants first when talking about natural products, but trees and shrubs can also provide excellent sources of material that could form the basis of a new therapeutic agent. Animals too, whether highly developed or poorly developed, whether they live on land, in the sea, or in the air, can be excellent sources of natural products. Bacteria, smuts, rusts, yeasts, moulds, fungi, and many other forms of what we consider to be primitive life can provide compounds, or leads to compounds, that can potentially be very useful therapeutic agents (Spainhour, 2005). Among nature's provisions to humankind over the years are the tools for the first attempts at therapeutic intervention (Nakanishi, 1999a; Nakanishi, 1999b). The Nei Ching is one of the earliest health science anthologies ever produced and dates back to the thirtieth century BC (Nakanishi, 1999a; Nakanishi, 1999b). Some of the first records on the use of natural products in medicine were written in cuneiform in Mesopotamia on clay tablets and date to approximately 2600 BC (Cragg and Newman, 2001a; Cragg and Newman, 2001b; Nakanishi, 1999a; Nakanishi, 1999b).
Indeed, many of these agents continue to exist in one form or another to this day as treatments for inflammation, influenza, coughing, and parasitic infestations (Holt and Chandra, 2002; Spainhour, 2005). The best known natural products documentation is the Ebers Papyrus, which documents nearly 1000 different substances and formulations, most of which are plant-based medicines (Nakanishi, 1999a; Nakanishi, 1999b; Spainhour, 2005). Moreover, natural products continue to be potential sources of new compounds or molecules awaiting further scientific elucidation, like the newly isolated alkylresorcinols from Urginea indica. The World Health Organization estimates that approximately 80% of the world's population relies primarily on traditional medicines as sources for their primary health care (Farnsworth et al., 1985). Over 100 chemical substances that are considered to be important drugs, either currently in use or widely used in one or more countries of the world, have been derived from a little under 100 different plants. Approximately 75% of these substances were discovered as a result of chemical studies focused on the isolation of active substances from plants used in traditional medicine (Cragg and Newman, 2001a; Newman, 2001b; Spainhour, 2005). Many natural product medications are derived from polyketides, which include antibiotic, antifungal, anticancer, anthelmintic and immunosuppressant compounds such as the erythromycins, tetracyclines, amphotericins, daunorubicins, avermectins, and rapamycins. These derivatives are used in the treatment of a host of disease conditions affecting various body systems, such as the central nervous system, cardiovascular system, renal system, visual system and common integument. Natural products as sources of antibiotics Antibiotic is a word originally coined for those natural compounds with antimicrobial properties. The prevalence of natural product-derived antibacterial drugs may be due to the evolution of secondary metabolites as biologically active chemicals which conferred some selectional advantage on the producing organisms. Natural products are likely to have evolved to penetrate cell membranes and interact with specific protein targets (Stone and Williams, 1992). The structural complexity of many natural products is required for the inhibition of many antibacterial protein targets (Butler, 2004; Koehn and Carter, 2005; Butler and Buss, 2006). Antivirals derived from natural products Numerous compounds from structural classes that include peptides, terpenoids, polysaccharides, steroids and alkaloids have been revealed through antiviral testing to potentially inhibit both RNA and DNA viruses (Abad Martinez et al., 2008). Infections with viruses are counteracted by the host's natural defences, which prevent or limit the extent of these diseases. Interferon, a protein moiety produced in virus-infected cells, provides one of these defences. Interferon, when attached to the cell membrane, causes the production of a new protein that shuts off virus replication and consequently may lead to suppression of the viral infection. Therefore, antiviral compounds could be produced by the synthesis of drugs that induce interferon production before or after the infection, in the sense that they activate certain host defence mechanisms (Becker, 1980). Marine compounds are good sources of pharmacological agents.
In the market today, there are over 40 pharmacological compounds, including alternative antiviral medicines and compounds being tested as potential antiviral drugs at the preclinical and clinical stages (Yasuhara-Bell and Lu, 2010). Some of the marine-derived antiviral agents circulating in the market or in clinical development include acyclovir, Ara-A (vidarabine), Ara-C (cytarabine), avarol, azidothymidine (zidovudine), and cyanovirin-N (Yasuhara-Bell and Lu, 2010). Triterpenoids isolated from plants are biologically active natural products attracting considerable interest due to their variety of structures and their broad range of biological activities. Some compounds having significant anti-tumor activities in in vivo assays have been reported, and some of these compounds are useful in the development of novel drugs with pharmacological actions (Barquero et al., 2006). The triterpenoid saponin family, of which the avicins are members, reduces both oxidative and nitrosative cellular stress, resulting in suppression of the development of malignancies and other related diseases (Haridas et al., 2001). Milk has been reported to contain antiviral agents (Matthews et al., 1976; Newburg et al., 1992). Lactoferrin is one such agent, later shown to inhibit in vitro the human immunodeficiency virus (HIV-1), human herpes simplex virus (HSV-1 and -2), human cytomegalovirus, respiratory syncytial virus, poliovirus and rotavirus (Marchetti et al., 1999). Assayed chemically modified proteins presented antiviral activity against HSV-1 before, during and after infection, although higher concentrations of modified proteins are required when present before infection as compared to during or after infection. This therefore suggests that targeted chemical modification of some natural products might provide antiviral compounds effective against HSV-1 infection (Oevermann et al., 2003). Antiprotozoal potentials of natural products Parasitic diseases are a major public health problem, especially in tropical developing countries, affecting hundreds of millions of people (Tagboto and Townson, 2001). During phagocytosis, reactive oxygen species are generated by neutrophilic granulocytes as a means of natural defence against invading microorganisms (Baehner et al., 1982). It is believed that these oxygen radicals, formed by electron transfer processes, play a significant role in the mechanism of action of xenobiotics (Eberson, 1985; Halliwell and Gutteridge, 1985). Some antiprotozoal agents have been shown to possess this form of mechanism of action (Kovacic et al., 1989), although others may act in different ways. Among the established antiprotozoal drugs from natural sources used in treating human protozoan infections are quinine from Cinchona species and artemisinin from Artemisia annua for malaria treatment, and Psychotria ipecacuanha for treating amoebiasis (Tagboto and Townson, 2001). Three alkaloids, namely quinidine, cinchonine, and cinchonidine, together with quinine, have significant antimalarial activity; all of these compounds were isolated from Cinchona trees. Totaquine, an antimalarial agent containing all four alkaloids, was used in the past as a cheap alternative to quinine sulphate (Dobson, 1998). Seven out of 14 antiparasitic drugs approved from 1981-2006 are natural product derivatives, including artemisinin (Newman and Cragg, 2007). Apart from the established antiprotozoal drugs, natural products still possess the potential to provide more alternative sources of antiprotozoal medication that need further scientific elucidation.
Allium sativum has been shown to possess trypanocidal activity both in vitro and in vivo (Mikail, 2009a), and several other plants possess this activity (Mikail, 2009b). Antifungal property of natural products Antifungal drugs are used in treating any of the following disease conditions: allergic reactions to fungal proteins, toxic reactions to toxins present in certain fungi, and infectious mycoses, which are the most serious and the most difficult to diagnose and treat, because mycoses come in many forms (Barret, 2002). The polyene natural product amphotericin B is the drug most commonly used in treating these disease conditions (Gallis et al., 1990; Wingard et al., 1999). Various newer lipid formulations are also used in handling such conditions (Hiemenz and Walsh, 1996). Griseofulvin, first isolated from the fungus Penicillium griseofulvum, has been used in the treatment of dermatophyte infections for the past 30 years (Finkelstein et al., 1996). The polyene antifungal antibiotic nystatin is used for prophylaxis and treatment of candidiasis of the skin and mucous membranes (Waugh, 2008). In Brazil, plants from many biomes, such as the Cerrado (savannah) and the Atlantic and Amazon rainforests, have been used in the treatment of several tropical diseases, such as leishmaniasis, malaria, schistosomiasis, and fungal and bacterial infections; these are mostly used by local populations as natural medicines (Alves et al., 2000). Anthelminthic activity of natural products More than 1 billion people are reported by the World Health Organization to suffer from neglected tropical diseases such as helminthiasis. This disease condition is a major health problem throughout developing countries and is also a food safety issue worldwide (Savioli, 2009). Th2 immunity is the key to protective immunity against all helminths, although the final effector mechanisms of helminth expulsion are distinct for each helminth, which could be due to the different invasion strategy of each helminth (Koyasu et al., 2010). Worm expulsion is dependent on Th2 immune responses; critical for worm expulsion are the Th2 cytokines IL-4 and IL-13, and lack of either of these cytokines significantly delays worm expulsion (Finkelman et al., 2004). Anthelmintics act rapidly and selectively on the neuromuscular transmission of nematodes: agonism at nicotinic acetylcholine receptors of nematode muscle causes spastic paralysis; organophosphates antagonize cholinesterase; increased opening of glutamate-gated chloride (GluCl) channels produces paralysis of pharyngeal pumping; some agents increase calcium permeability; while other anthelmintics have a biochemical mode of action (Martin, 1997). Ivermectin was discovered from a microorganism, Streptomyces avermitilis, isolated from the soil of an oceanside golf course in Japan, and was found to have potent bioactivity. It has systemic anti-parasitic activities, effective against helminths, arachnids, and insects, but not against protozoa, bacteria, flatworms or fungi (Omura, 2008). Ivermectin was the first agent discovered to act against both endo- and ectoparasites; at unprecedentedly low doses it can easily be used orally, topically and parenterally (Arena et al., 1992; Omura, 2002). Digenea simplex and Chondria armata are two Japanese red algae which have been used for their potent anthelminthic properties for more than 1000 years.
Elimination of intestinal worms, such as parasitic roundworms (Ascaris lumbricoides), tapeworms (Taenia spp.), and whipworms (Trichuris trichiura), is among the anthelminthic properties of these Japanese algae. Domoic acid and kainic acid, two closely related compounds isolated from these red algae, are responsible for these curative properties (Gerwick et al., 2007). Role of natural products in treating non-infectious diseases Although natural products are the origin of several drugs used in the treatment of many infectious diseases, they also play a significant role in the treatment of several non-infectious disease conditions. Diabetes mellitus and obesity Hyperglycaemia is the unifying feature of this heterogeneous endocrine disorder, and every year the number of diabetic patients rises by 4-5% (Wagman and Nuss, 2001). Plant extracts and complex microbial secondary metabolites have attracted the attention of the scientific world for their potential use as drugs for the treatment of chronic diseases such as Type II diabetes (Bedekar et al., 2010). Acarbose was discovered from compounds isolated from Actinomycetes species, which are potent inhibitors of digestive enzymes such as α-amylase, sucrase, and maltase. Acarbose is the most widely used digestive enzyme inhibitor among the numerous antidiabetic drugs used for the treatment of Type II diabetes (Bedekar et al., 2010). Miglitol, derived from 1-deoxynojirimycin, is one of the widely used α-glucosidase inhibitors in the treatment of Type II diabetes. Nojirimycin, deoxynojirimycin, and their derivatives are new compounds with inhibitory properties derived from various Bacillus and Streptomyces strains (Schmidt et al., 1979; Tan, 1997). Another α-glucosidase inhibitor used as an antidiabetic drug, mostly in Asia, is voglibose, which is synthesized from valiolamine isolated from the fermentation broth of Streptomyces hygroscopicus subsp. limoneus (Matsuo et al., 1992; Goke et al., 1995). Interestingly, polyphenolic natural compounds such as flavonoids have demonstrated numerous health benefits, addressing the issues of obesity and diabetes through their digestive enzyme inhibition activity, induction of apoptosis in adipose tissue, and so on (Nelson-Dooley et al., 2005). A subgroup of flavonoids, the anthocyanins, are water-soluble plant pigments responsible for the blue, purple and red coloration of many plant tissues. Anthocyanins are extracted mostly from plants or plant waste in the form of a mixture. Anthocyanidins are the aglycone forms of anthocyanins, of which 17 are found in nature. Anthocyanins have antioxidant and antihypertensive activities and have also been shown to inhibit lipid oxidation. Anthocyanins specifically inhibit α-glucosidase activity and have the potential to reduce blood glucose levels after starch-rich meals (Matsui et al., 2001a; Bedekar, 2010). Through influencing signalling molecules, natural products can prevent both adult and childhood obesity. Under physiological conditions such as exercise, hypoxia, the presence of reactive oxygen species (ROS), and ischemia/reperfusion, AMP-activated protein kinase (AMPK), the master regulator of metabolic processes, is activated.
Natural products also activate AMPK to reduce obesity through the regulation of fatty acid metabolism-related proteins such as acetyl-coenzyme A (CoA) carboxylase (ACC), sterol regulatory element-binding protein (SREBP), fatty acid synthase (FAS) and so forth (Hwang et al., 2011). Hypertension The hypothesis that meat is a source of peptides that are effective in preventing and reducing chronic lifestyle-related diseases (CLSRDs) such as hypertension has been tested. Empowering hypertensive people to improve their quality of life, such as by offering nutritional food rich in antioxidant vitamins and in proteins or biologically active peptides, can lower blood pressure, possibly by preventing an underlying cause of the condition. Provision of these forms of functional food is nutritionally useful even for normotensive individuals. The underlying aetiology of clinical hypertension may be a deficiency in proteins of meat origin, along with abnormalities in carbohydrate and fat metabolism. Proteolysis of meat muscle generates a large number of peptides with nutrafunctional roles; they have strong angiotensin-converting enzyme inhibitory activity, which perhaps lowers blood pressure (Ahmed and Muguruma, 2010). Dietary supplements can promote cholesterol-lowering benefits; some of the supplements reported to have significant low-density lipoprotein-cholesterol (LDL-C) lowering properties are soluble plant fibre (oats, psyllium, pectin, flaxseed, barley, guar gum, cellulose, lignins, wheat bran), plant sterols, soy proteins, nuts (almonds, pecans, walnuts) and red yeast rice supplements (Nijjar et al., 2010). Analgesia and recreation Pain is simply an undesirable physical or emotional experience. Natural products have been used to treat pain disorders for the past 7000 years; good examples are the opium poppy (Papaver somniferum) and the bark of the willow tree (Salix spp.). In the 19th century, some individual components of different natural product remedies were identified and purified. One of the most widely used and available compounds for the management of mild pain is aspirin, or acetylsalicylic acid, derived from salicylic acid, which is extracted from the willow tree (Salix alba). Opioid is a name given to all compounds having the same mechanism of action as the constituents of opium. These are derived from the opium juice of Papaver somniferum; examples of this group of drugs include morphine, codeine and thebaine. These drugs are also used for recreational purposes apart from their use as analgesics (McCurdy and Scully, 2005). Cocaine interacts with voltage-gated ion channels, and its blockade of sodium channels is responsible for its local anaesthetic activity. Cocaine also has the ability to block the dopamine transporter, which creates a euphoric state, meaning it too is used for recreational purposes (McCurdy and Scully, 2005). Caffeine is the most widely used psychoactive drug in the world, found in a number of plant sources: coffee (Coffea arabica, native to Africa), tea (Camellia sinensis, native to China), and cacao (Theobroma cacao, native to South and Central America), from which chocolate is made. Other caffeine-containing plants include kola (Cola acuminata), guarana (Paullinia cupana), and yerba mate (Ilex paraguariensis). The theophyllines found in tea and the theobromines found in cacao are other botanical methylated xanthines, closely related to caffeine, with psychoactive effects.
Nicotine, from the tobacco plant (Nicotiana tabacum, Nicotiana rustica) and related species, is another psychoactive substance. The cannabis plant is the source of cannabinoid preparations such as marijuana and hashish, which also possess psychoactive effects in the form of relaxation, sedation, intensification of thoughts and feelings, alterations of perception, and increased appetite (Presti, 2003). Antipsychotics From ancient times to the present, in Indian ayurvedic medicine, extracts of the snakeroot plant, Rauwolfia serpentina, have been used to treat psychotic symptoms. Reserpine was isolated and identified from R. serpentina in the 20th century and was found to cause decreases in the activity of monoaminergic neurons using the neurotransmitters dopamine, norepinephrine, and serotonin (Presti, 2003). Polygalasaponins are extracts of a plant (Polygala tenuifolia Willdenow) that has been used as an antipsychotic for hundreds of years in Korean traditional medicine. Polygalasaponin has been shown to have dopamine and serotonin receptor antagonism properties in vivo, suggesting its possible utility as an antipsychotic agent (Chung et al., 2002). Antidepressants For centuries in Europe, the extract of Saint John's wort (Hypericum perforatum) has been used for its antidepressant effects (Presti, 2003). Natural products derived from chemically defended marine organisms provide pharmacophores related to serotonin or to clinically utilized antidepressant drugs. Aaptamine and 5,6-dibromo-N,N-dimethyltryptamine are two marine natural products which produced significant antidepressant-like activity in the forced swim test. In the tail suspension test, the antidepressant-like effects of 5,6-dibromo-N,N-dimethyltryptamine were confirmed, whereas aaptamine did not produce significant results (Diers et al., 2008). Wound healing (angiogenesis) The plant Saint John's wort, when used in topical preparations, facilitates wound healing; its healing properties were mentioned in the ancient medical texts of Hippocrates, Pliny, and Galen (Presti, 2003). Honey has been shown to possess various antimicrobial activities in addition to its wound healing effect; a good example is manuka honey (from Leptospermum scoparium). Preparations of aloe vera (Aloe barbadensis), cocoa, and oak bark extracts have been used to treat various ailments, especially those of the skin (Davis and Perez, 2009). Upon injury, papaya and fig trees produce latex rich in proteolytic enzymes, and the juices extracted from the stem or fruit of plants such as the pineapple contain large amounts of cysteine proteinases (Rowan et al., 1990). Chymopapain is one of these enzymes, used in medicine to treat intervertebral disc prolapse with a success rate similar to that of surgery (Smith and Brown, 1967). For burn injuries, ananain and comosain are used as debriding agents (Rowan et al., 1990). Bromelain, papain, and ficin have been used to replace glucocorticoids and non-steroidal anti-rheumatics as anti-inflammatory drugs. Role of natural products in preventive medicine Natural products like medicinal plants and foodstuffs are used for their preventive effects against lifestyle-related diseases such as coronary heart disease, hypertension, thrombosis, allergic inflammation, arteriosclerosis, diabetes and cancer, although the clinical basis and experimental evidence are insufficient and unclear.
However, biochemical and pharmacological studies of isolated natural compounds from various medicinal plants and foodstuffs indicate that fucoidan (polysaccharides), carp oil (fatty acids) and triterpenoids inhibited tumour growth and/or metastasis in the liver of tumour-bearing mice through the inhibition of tumour-induced neovascularization. The inhibition of thrombin-induced adhesion molecule expression, through inhibition of protein kinase C activation, has been demonstrated by baicalein (flavones) isolated from Scutellaria baicalensis roots. Furthermore, through endothelium-dependent nitric oxide production, xanthoangelol (chalcones) isolated from Angelica keiskei roots inhibited catecholamine-induced vasoconstriction (Kimura, 2005). Many plants are used in traditional medicine as active agents against various effects induced by snakebite. Baccharis trimera (Asteraceae), known as carqueja in Brazil, has been shown to inhibit the haemorrhagic and proteolytic activity caused by Bothrops snake venoms (Januario et al., 2004). Biologicals like vaccines and antisera have a great role in preventive medicine. Polyclonal antibodies are mixtures of antibody specificities which all recognize the same antigen. Blood serum that contains polyclonal antibodies is known as antiserum. Polyclonal antibodies are used in medicine to confer passive immunity to certain diseases. For instance, transfusion of serum antibodies from a human survivor of Ebola virus is the only effective treatment for the viral infection. Antiserum is also used in medicine as antitoxin or antivenin, which contains antibodies specific for venom from poisonous reptiles, arachnids and insects; people who have been bitten or stung by these animals are treated with this antiserum. A vaccine is a biological preparation that improves immunity to a particular disease, made from attenuated or killed forms of a microbe or its toxins. Vaccines are used for either prophylaxis or therapy against certain disease conditions such as smallpox. The smallpox vaccine, developed by Edward Jenner in 1796, was the first successful vaccine (Stewart and Devlin, 2006).
Predictive Models for Elastic Bending Behavior of a Wood Composite Sandwich Panel

Strands produced from small-diameter timbers of lodgepole and ponderosa pine were used to fabricate a composite sandwich structure as a replacement for traditional building envelope materials, such as roofing. It is beneficial to develop models that are verified to predict the behavior of these sandwich structures under typical service loads. When used for building envelopes, these structural panels are subjected to bending due to wind, snow, live, and dead loads during their service life. The objective of this study was to develop a theoretical and a finite element (FE) model to evaluate the elastic bending behavior of the wood-strand composite sandwich panel with a biaxial corrugated core. The effect of shear deformation was shown to be negligible by applying two theoretical models, the Euler-Bernoulli and Timoshenko beam theories. Tensile tests were conducted to obtain the material properties as inputs into the models. Predicted bending stiffness of the sandwich panels using the Euler-Bernoulli, Timoshenko, and FE models differed from the experimental results by 3.6%, 5.2%, and 6.5%, respectively. Using the FE and theoretical models, a sensitivity analysis was conducted to explore the change in bending stiffness due to intrinsic variation in the material properties of the wood composite material.

Introduction

Increasing world population and limited natural resources require us to rethink how we utilize our forests more productively to construct effective building products for houses. Forests play a critical role in sequestering carbon, and building products can further this cause by continuing to store the carbon for a prolonged period. As an order of magnitude, a tree on average absorbs one ton of CO2 and produces 0.7 tons of O2 for every cubic meter of growth [1], and every cubic meter of wood as a building material can reduce CO2 emissions by an average of 2 tons compared to other building materials such as steel and concrete [2]. A sustainable forest management plan to protect forests against fire, insects, and disease should consider thinning operations that result in improving forest health. Often, this requires the removal of small-diameter timber (SDT) at a cost and requires high-value markets or high-volume usage of these low-quality logs to recover the forest treatment costs. For example, the average cost for a forest service thinning (approximately $70/dry ton) is usually more than the market value of the SDT removed (energy and chip markets pay approximately $25 to $35/dry ton) [3]. Besides existing products such as medium-density fiberboard (MDF) and oriented strand board (OSB) produced from SDT, it is worth developing new products by converting these low-quality SDTs into more robust and versatile building products that would meet structural performance as well as energy requirements. Sandwich structures are widely used in areas like the aerospace, automotive, civil, building construction, and marine industries because their clever construction reduces their weight while increasing their mechanical performance [4]. Structures with hollow cores have attracted particular interest from researchers because the hollow geometry of the core can be used to improve thermal performance when the cavities are filled with appropriate materials, such as closed-cell foam. This idea has also been used to develop wood-based sandwich panels with different 3D core geometries [5][6][7][8][9][10].
Some of the disadvantages of these panels are that they use a wet forming process for panel manufacturing, have a relatively low structural capacity, and have poor interfacial shear strength at the intersection between the core and face plies, resulting in poor structural performance. Voth et al. [11] designed a 3D core with biaxial corrugated geometry and used wood strands produced from SDT to fabricate the panel shown in Figure 1. The bending behavior of wood-strand-based sandwich panels with biaxial corrugated core geometry was investigated experimentally, and the results showed the stiffness in both directions, along the length and width, is higher than that of oriented strand board (OSB) and 5-ply plywood. Additionally, sandwich panels with cavities filled with insulation foam decreased the thermal conductivity of the panels by over 17% while improving the panel strength and stiffness by 34 and 16 percent [12]. Creep [13] and impact [14] performance of the biaxial corrugated core sandwich panel were also investigated experimentally. For application of these sandwich panels as a structural component in building envelopes (roofs, floors, and walls) with longer spans, the bending stiffness and the bond area between the core and the faces need to be increased, as they typically limit the load-carrying capacity. Models to predict their behavior are useful tools to design and engineer the geometry of cores and sandwich panels, as long as they are verified and validated. Additionally, it is critical to accurately determine the material properties that serve as inputs to these models. The finite element method is an effective way to simulate complicated geometries under different loading and boundary conditions to predict behavior and explore the distribution of stress and strain in structures. It is also useful to develop theoretical models to derive closed-form solutions to understand the influence of material properties and geometrical features on the behavior of the structures. These models can then be used to analyze the behavior of structural components subjected to service loads. In this study, a finite element model and a theoretical model were developed to evaluate the bending behavior of a wood-strand composite sandwich panel with the biaxial corrugated core geometry designed by Voth [11]. Elastic constants of the wood-strand composite material required to develop both the finite element (FE) and the theoretical models were evaluated, and the models were verified against the experimental results.
Considering the intrinsic nature of wood, the models were further applied in a sensitivity analysis to understand the influence of variation in elastic constants on the bending stiffness of the sandwich panel.

Materials

Using a disc-strander (manufactured by CAE) operating at a rotational speed of 500 rpm (shown in Figure 1a), thin wood strands with an average thickness of 0.36 mm were produced from ponderosa pine logs ranging in diameter from 191 to 311 mm. Wood strands were then dried to a target moisture content of 3%-5% and sprayed with an aerosolized liquid phenol formaldehyde (PF) resin in a drum blender (shown in Figure 1b) to a target resin content of 8% of the oven-dry weight of the wood strands. Subsequently, wood strands sprayed with resin were oriented and hand-formed unidirectionally using a forming box (shown in Figure 1c) to fabricate a wood strand mat, or preform, as shown in Figure 1d. Unidirectional mats were consolidated to a thickness of 6.35 mm into flat panels for the outer plies or corrugated core panels (shown in Figure 1f) with a matched-die mold (shown in Figure 1e) in a hot press. For both flat and corrugated panels, the preform was hot-pressed for 6 min at an operating temperature of 160 °C to reach a target thickness of 6.35 mm. The pressure applied by the press was an uncontrolled variable and was dependent on the density and the target thickness of the panel. To fabricate the sandwich panels as shown in Figure 1f-h, flat panels were bonded with a two-part epoxy resin (Loctite Epoxy by Henkel) to the biaxial corrugated cores. A 400-grit sandpaper was used for a light sanding of the bonding area of both the flat and corrugated panels. A thin layer of resin was applied only to both sides of the corrugated core. Flat panels were placed on both sides of the corrugated core to form a sandwich panel as shown in Figure 1g, and the panels were clamped for 24 h before cutting and trimming. A unit cell (UC), the simplest repeating element of the biaxial corrugated core geometry, along with its dimensions, is given in Figure 2a. Directions along the length and width of the panel are defined as the longitudinal and transverse directions. Wood strands were oriented along the longitudinal direction to make the preform for both the corrugated core and the flat panels. L1, L2, and h are the length, width, and height of the UC. The core wall thickness is defined as t. Y21, Y22, and X1 are associated with the dimensions of the bonding area between the flat panels and the core. The angle of the slanted areas is represented by θ1 and θ2, as shown in Figure 2b.
Experiments

This section is divided into two subsections. The first introduces the experiments conducted to establish the material properties employed in the FE and theoretical models. The second describes the experimental bending tests performed to verify the sandwich panel bending stiffness predictions of the FE and theoretical models.

Tensile Test

Wood is an orthotropic material and requires nine elastic constants to be fully defined [15]. However, because the fiber orientation in individual strands varies in the width direction with respect to idealized orthotropic material axes and the preform is a conglomeration of wood strands oriented uniaxially, transverse isotropy was assumed to define the material properties of both the flat plies and the core [16]. Therefore, five elastic constants, including E1, E2 = E3, G12 = G13, ν12 = ν13, and ν23, were required to fully define the sandwich panel. Subscripts 1, 2, and 3 refer to the material axes (local coordinate system) of the composite material (Figure 2b). The 2-3 plane is the plane of isotropy for this transversely isotropic material. To determine these representative material properties, tension specimens shown in Figure 3a were cut from flat panels at different angles (0°, 15°, and 90°) with respect to the longitudinal axis (x axis in Figure 2) and tested (Figure 3b) per the standard testing guidelines in ASTM D1037 [17]. Testing was performed using an 8.8-kN Instron test frame Model 4466; the change in gauge length (elongation) was recorded using a 25-mm Epsilon extensometer, and strain measurements were obtained by dividing the elongation by the initial gauge length (25 mm).
The longitudinal and transverse Young's moduli (E1 and E2) were obtained from the test results of the 0° and 90° specimens, respectively. These material properties, along with the test results of coupons prepared at 15° angles, were used to determine the shear modulus, G12, using the transformation equation [15]:

$$\frac{1}{E_\theta} = \frac{\cos^4\theta}{E_1} + \left(\frac{1}{G_{12}} - \frac{2\nu_{12}}{E_1}\right)\sin^2\theta\,\cos^2\theta + \frac{\sin^4\theta}{E_2} \quad (1)$$

where E1, E2, Eθ, ν12, and θ are the moduli of elasticity in the longitudinal, transverse, and θ-directions, Poisson's ratio, and the assumed angle in this study (15°), respectively. The value for Poisson's ratio, ν12, was assumed to be 0.358, based on a previous study [18] which evaluated the mechanical properties of a wood strand composite material with a structure and manufacturing process similar to the composite material used in this study. Among commercially produced wood-strand-based materials, structural composite lumber (SCL) [e.g., parallel strand lumber (PSL), oriented strand lumber (OSL), and laminated strand lumber (LSL)] is similar to the material in this study; these are manufactured using long veneer strips or wood strands that are likewise aligned in the longitudinal direction. Therefore, the Poisson's ratio, ν23, of PSL as determined by Clouston [19] was assumed for the material of interest in this study. Considering the transverse isotropy assumption, which results in material isotropy in the 2-3 plane, G23 was calculated as

$$G_{23} = \frac{E_2}{2\left(1 + \nu_{23}\right)} \quad (2)$$

Since wood is a natural material influenced by variations in growing conditions, its properties are influenced by naturally occurring growth characteristics, such as knots, variation in cell wall thickness between the annual rings, and grain deviations [20]. This inherent variability can significantly influence the properties of wood strands produced from logs or lumber, and this variation in turn can affect the properties of the composite material produced using these strands. Furthermore, variations in the mat architecture can also be expected due to variation in the orientation of the wood strands with respect to the longitudinal axis as the mat is formed. In addition, mat architecture would also influence the heat and mass transfer during the hot-pressing process due to variations in moisture and heat distribution within a mat [21]. Effort was made to reduce these variations by tightly controlling the processing variables and the press schedule during the production process. Yet, variation in density within a panel and between panels after consolidation is inevitable and should be expected. As mentioned, this variation in density can affect the material properties obtained from the tensile test. To explore the influence of density on the longitudinal modulus, panels with density varying between 600 and 950 kg/m³ were fabricated and submitted to tensile testing.
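To make the back-calculation of G12 concrete, the short Python sketch below implements the off-axis compliance transformation of Equation (1) and inverts it for G12. This is an illustrative helper, not code from the study; the forward-check value of Eθ is generated from the reported constants rather than taken from a measured coupon.

import math

def off_axis_modulus(E1, E2, G12, nu12, theta_deg):
    """E_theta for an orthotropic lamina via the compliance transformation (Eq. (1))."""
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    inv_E = c**4 / E1 + (1.0 / G12 - 2.0 * nu12 / E1) * s**2 * c**2 + s**4 / E2
    return 1.0 / inv_E

def solve_G12(E1, E2, nu12, E_theta, theta_deg):
    """Invert Eq. (1) for G12 given a measured off-axis modulus E_theta."""
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    rhs = 1.0 / E_theta - c**4 / E1 - s**4 / E2 + (2.0 * nu12 / E1) * s**2 * c**2
    return s**2 * c**2 / rhs

E1, E2, nu12 = 9.80, 1.71, 0.358                  # GPa, GPa, - (Tables 2 and 3)
E15 = off_axis_modulus(E1, E2, 2.56, nu12, 15.0)  # consistency check with G12 = 2.56 GPa
print(f"E_15 = {E15:.2f} GPa, back-solved G12 = {solve_G12(E1, E2, nu12, E15, 15.0):.2f} GPa")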
Flexural Test

To measure the bending stiffness, a four-point bending test was conducted per ASTM D7249 [22] on corrugated core and sandwich panel specimens, as shown in Figure 3c,d, respectively. The bending specimens were cut in the longitudinal direction of the panel with the width of one UC. As per the standard, five specimens of each panel were tested. The bending stiffness (EI) was determined using Equation (3) and the load-deflection results obtained from the bending test:

$$EI = \frac{P a \left(3L^2 - 4a^2\right)}{48\,\Delta} \quad (3)$$

where P, Δ, L, and a are the total bending load, the deflection at mid-span, the span length, and the distance of the loading points from the supports, respectively. The configuration of the bending test is shown in Figure 4. Table 1 summarizes the dimensions of all flexural test specimens. [Table 1, only partially recovered: the sandwich panel specimens measured 559 mm in span length, 108 mm in width, and 38 mm in height.] Note that the specimen length in Table 1 indicates the span length and does not include a 25.5-mm overhang at each end of the flexural specimens (i.e., the total length of these specimens was 51 mm longer than the span length). Deflection was measured using a ±25-mm linear variable differential transformer (LVDT) located at the mid-span of the test specimens.
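Equation (3) is simple enough to wrap in a reusable helper. In the sketch below, the relation reconstructed above is assumed, with P taken as the total applied load; the span comes from Table 1, while the load increment, deflection, and load-point distance are placeholder values for illustration only.

def bending_stiffness(P, delta, L, a):
    """EI from a four-point bending test (Eq. (3)).
    P: total load (N), delta: mid-span deflection (mm),
    L: span (mm), a: load-point distance from support (mm)."""
    return P * a * (3.0 * L**2 - 4.0 * a**2) / (48.0 * delta)

L_span = 559.0         # mm, sandwich specimen span from Table 1
a_load = L_span / 3.0  # mm, assumed third-point spacing (illustrative)
EI = bending_stiffness(P=1200.0, delta=5.0, L=L_span, a=a_load)
print(f"EI = {EI:.3e} N*mm^2")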
Finite Element Model

To evaluate the bending behavior of both the sandwich panel and the corrugated core within the elastic region, FE models were developed using Abaqus finite element software (Figure 4). For both the flat outer layers and the core, shell elements (S4R) with hourglass control and a reduced integration rule were employed. A shell element was chosen to easily assign material orientation to the core with its complex geometry. Since this type of element considerably decreases the simulation and running time of the FE model, a finer mesh was adopted to increase the accuracy. In addition, all elements were given the capability to undergo finite strains and rotations. Considering the fabrication process, both the corrugated core and the flat outer plies were assumed to be transversely isotropic. To simulate a perfect bond between the face-sheets and the 3-D core, rigid links were modeled between the nodes of these two components using a tie constraint. As for the boundary conditions, the nodes in the contact area between the specimens and the supports were constrained in the z direction. Since the supports were free to rotate about the y axis, consistent with ASTM D7249 [22], no other boundary condition was applied. However, to avoid instability in the structure, the centerline of the sandwich panel exactly at the mid-span was fixed to prevent any translation in the x direction. Loading was applied as a prescribed downward deflection of the loading heads. To determine an acceptable element size, a mesh convergence analysis was performed on the flexural 3-D core model. The results of this convergence study are displayed in Figure 5, comparing the bending load corresponding to a deflection of 25.4 mm at the center of the specimen to the number of elements. As the number of elements increased from 4263 to 59,814 (corresponding to an element size of 5 mm down to 1.3 mm), there was a 1.73% change in the resulting bending load. Because of this negligible change in the bending load and the noticeable savings in computation time, the smallest number of elements considered, 4263 (element size of 5 mm), was chosen to mesh the specimens.

Theoretical Model

Due to the complex geometry of a biaxially corrugated core with cavities caused by the three-dimensional geometry, it is not easy to develop a theoretical model that captures all deformations under different types of loading. One method to overcome this difficulty is homogenization.
In this section, the inhomogeneous sandwich panel with corrugated core geometry is replaced with an equivalent homogeneous and continuous layer. The effective properties for this homogeneous and continuous layer consist of the effective extensional moduli in the x and y directions, the effective Poisson's ratio, and the effective shear modulus in the x-y plane. Because it is common practice to compute these effective properties considering only in-plane loading [23][24][25], classical lamination theory is applied, relating the in-plane force and moment resultants to the mid-plane strains and curvatures as

$$\begin{Bmatrix} N \\ M \end{Bmatrix} = \begin{bmatrix} A & B \\ B & D \end{bmatrix}\begin{Bmatrix} \varepsilon^0 \\ \kappa \end{Bmatrix} \quad (4)$$

where A, B, and D are the extensional stiffness, the bending-extension coupling, and the bending stiffness matrices, respectively, and their components are defined as

$$\left(A_{ij},\, B_{ij},\, D_{ij}\right) = \int_{-h/2}^{h/2} \overline{Q}_{ij}\left(1,\, z,\, z^2\right) dz \quad (5)$$

The symmetric and balanced configuration of the sandwich panel about its mid-plane results in a zero bending-extension coupling matrix, [B] (see Appendix A). In addition, because the sandwich panel was assumed to be a simplified 2-D structure and evaluated like a plate, all stresses through the thickness were neglected. Therefore, the constitutive equation for the sandwich panel is defined [23] as

$$\{N\} = [A]\{\varepsilon^0\} \quad (6)$$

and the components of the extensional stiffness matrix ([A]) given in Equation (5), expanded over the three layers, can be rewritten as

$$A_{ij} = \overline{Q}_{ij}^{f}\, h_f + \int_{-h_c/2}^{h_c/2} \overline{Q}_{ij}^{c}\, dz + \overline{Q}_{ij}^{f}\, h_f \quad (7)$$

where Q̄ij are components of the stiffness matrix in the global coordinate system (x-y-z) [26]. Additionally, h_f is the thickness of the outer layers and h_c is the height of the corrugated core. For uniaxial corrugated cores, such as those with sinusoidal or trapezoidal configurations where the corrugated geometries can be easily specified with a known function, computing these integrals over the height to calculate the extensional stiffness matrix ([A]) is straightforward [27][28][29]. However, since the geometry of the core analyzed in this study is biaxially corrugated and varies along both the x and y axes, the second term in Equation (7), representing the core layer, cannot be easily computed. Unlike other methods, which use a known function describing the core geometry to compute Equations (5) and (7), a discretization technique was used to simplify the integration in this study. Therefore, one quarter of the UC was broken down into seven simplified domains as shown in Figure 2b. Subsequently, integration was carried out over the simplified domains and averaged as

$$A_{ij}^{c} = \frac{1}{L_1 L_2}\sum_{k=1}^{7} \int_{V_k} \overline{Q}_{ij}\, dV \quad (8)$$

where L1 and L2 are the dimensions of the unit cell, and Vk is the volume of each domain shown in Figure 2b. It should be noted that the components of the global stiffness matrix (Q̄ij) of the core layer given in Equations (7) and (8) must be transformed according to the orientation of each slanted domain (for example, domains 2 and 6) using the corresponding transformation matrix T. Inverting the extensional stiffness matrix ([A]) in Equation (6) results in

$$\{\varepsilon^0\} = [a]\{N\}, \qquad [a] = [A]^{-1} \quad (9,\,10)$$

The stress-strain relation for a homogeneous and continuous lamina with thickness h is expressed as

$$\{\varepsilon\} = [S]\{\sigma\} \quad (11)$$

and comparing Equation (10) for the biaxial corrugated core sandwich panel with Equation (11) for a lamina gives the effective material properties of a lamina that is equivalent to the biaxial corrugated core sandwich panel. These material properties are summarized [23] as

$$E_x = \frac{1}{h\,a_{11}}, \quad E_y = \frac{1}{h\,a_{22}}, \quad G_{xy} = \frac{1}{h\,a_{66}}, \quad \nu_{xy} = -\frac{a_{12}}{a_{11}}, \quad \nu_{yx} = -\frac{a_{12}}{a_{22}} \quad (12)$$

Considering these effective material properties, two different beam models, classical beam theory (Euler-Bernoulli) and first-order shear deformation beam theory (Timoshenko), were employed to investigate the bending behavior of the equivalent structure.
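As a numerical illustration of the final homogenization step, the sketch below inverts an extensional stiffness matrix and reads off the effective constants of Equation (12). The 3×3 [A] used here is a made-up symmetric matrix standing in for the result of Equations (7) and (8); only the post-processing follows the procedure described above.

import numpy as np

h = 38.0  # mm, total sandwich thickness (Table 1)

# Placeholder extensional stiffness matrix [A] (N/mm); in the study this is
# assembled by integrating Q_ij over the face-sheets and the discretized core.
A = np.array([[2.1e5, 1.5e4, 0.0],
              [1.5e4, 6.0e4, 0.0],
              [0.0,   0.0,   4.5e4]])

a = np.linalg.inv(A)  # compliance matrix [a] = [A]^-1, Eq. (10)

# Effective constants of the equivalent homogeneous layer, Eq. (12) (moduli in MPa).
Ex, Ey, Gxy = 1.0 / (h * a[0, 0]), 1.0 / (h * a[1, 1]), 1.0 / (h * a[2, 2])
nu_xy, nu_yx = -a[0, 1] / a[0, 0], -a[0, 1] / a[1, 1]
print(f"Ex = {Ex:.1f} MPa, Ey = {Ey:.1f} MPa, Gxy = {Gxy:.1f} MPa, "
      f"nu_xy = {nu_xy:.3f}, nu_yx = {nu_yx:.3f}")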
Using the displacement fields of these beam models, the principle of minimum potential energy, and the variational method, the governing equations for this sandwich beam [30,31] were derived as

Euler-Bernoulli:
$$Q_{11} I\,\frac{d^4 w}{dx^4} = q(x) \quad (13)$$

Timoshenko:
$$k_s Q_{55} A\left(\frac{d^2 w}{dx^2} - \frac{d\phi}{dx}\right) + q(x) = 0, \qquad Q_{11} I\,\frac{d^2\phi}{dx^2} + k_s Q_{55} A\left(\frac{dw}{dx} - \phi\right) = 0 \quad (14)$$

where the beam deflection and the rotation of the cross section about the y axis are denoted by w and φ, respectively. Additionally, Q11 and Q55 (components of the stiffness matrix of the homogenized beam), the shear correction factor k_s, the cross-sectional area A, and the moment of inertia I are given as

$$Q_{11} = \frac{E_x}{1 - \nu_{xy}\nu_{yx}}, \quad Q_{55} = G_{13}, \quad I = \frac{bh^3}{12}, \quad A = bh, \quad k_s = \frac{5 + 5\nu_{xy}}{6 + 5\nu_{xy}} \quad (15)$$

As shown in Table 1, b and h are the width and the height of the sandwich panel test specimen, and the material properties are given in Equation (12). It should be noted that the effective out-of-plane shear modulus (G13) is assumed equal to the in-plane shear modulus (G12) [23]. Since Fourier series expansions satisfy the boundary conditions [32], they were used to solve the governing equation(s). Based on the Euler-Bernoulli and Timoshenko beam theories, the closed-form solution for the deflection of this sandwich beam with biaxial corrugated geometry under a four-point bending test, as shown in Figure 4, is obtained as

$$w(x) = \sum_{n=1,3,5,\dots}^{\infty}\left[\frac{q_n}{Q_{11} I\,(n\pi/L)^4} + \frac{q_n}{k_s Q_{55} A\,(n\pi/L)^2}\right]\sin\frac{n\pi x}{L}, \qquad q_n = \frac{2P}{L}\sin\frac{n\pi a}{L} \quad (16)$$

where the shear term in brackets is omitted for the Euler-Bernoulli solution.

Results and Discussion

Results of the tensile testing were used to obtain the elastic constants of the wood composite material. Using these properties, the FE and theoretical models of the bending behavior were applied and their results were compared with the experimental results.

Elastic Constants

To establish the properties of the wood composite material, 19 coupons were tested at each angle (0°, 15°, and 90°). A typical stress-strain curve, along with the failure mode, of a tensile coupon is presented in Figure 6. Since the stress-strain curve is almost linear, as shown by the regression model (red dotted line) and the coefficient of determination (R²), material nonlinearity for this wood composite sandwich panel can be ignored. Density plays an important role in the stress-strain curve and load-carrying capacity of this composite material. Therefore, the test results of those coupons with a density of around 640 kg/m³ were chosen to obtain the material properties; this avoided any variation caused by density. All panels were manufactured to a target density of 640 kg/m³ in this study, since this density has been used for similar wood-strand products [11][12][13][14][18] and is also a typical average density for commercially available OSB [20]. These material properties are summarized in Table 2, with coefficients of variation in percent presented in parentheses.

Table 2. Material properties of the wood strand composite (COV in parentheses): E1 = 9.80 GPa (9.4%); E2 = 1.71 GPa (13.4%); G12 = 2.56 GPa (37.8%); density = 640 kg/m³.

Variation in material properties is to be expected in all heterogeneous and anisotropic materials, as can be seen in the coefficient of variation values in Table 2. As explained, this is especially true for wood. For wood-based products, variation in density can affect the material properties. The influence of density on the longitudinal Young's modulus of the wood strand plies tested in this study is shown in Figure 7. The results indicate that a 46% change in density caused a 107% change in the longitudinal Young's modulus. Considering the material properties obtained from tensile testing and the assumption of transverse isotropy, the material properties of the wood strand composite are summarized in Table 3.
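The series solution above can be evaluated directly to see how small the shear contribution is. The sketch below compares the Euler-Bernoulli and Timoshenko mid-span deflections under four-point bending; the load, section, and effective-property values are representative placeholders, not the study's.

import math

def midspan_deflection(P, L, a, EI, kGA=None, n_terms=199):
    """Mid-span deflection of a simply supported beam with loads P/2 at
    x = a and x = L - a, from the Fourier-series solution (Eq. (16)).
    Pass kGA = k_s * Q55 * A to include the Timoshenko shear term."""
    w = 0.0
    for n in range(1, n_terms + 1, 2):       # even harmonics cancel by symmetry
        qn = (2.0 * P / L) * math.sin(n * math.pi * a / L)
        k = n * math.pi / L
        wn = qn / (EI * k**4)                # bending contribution
        if kGA is not None:
            wn += qn / (kGA * k**2)          # shear contribution
        w += wn * math.sin(n * math.pi / 2)  # evaluate at x = L/2
    return w

P, L, a = 2000.0, 559.0, 186.0  # N, mm, mm (illustrative)
EI, kGA = 2.0e9, 1.5e6          # N*mm^2, N (illustrative)
print(f"Euler-Bernoulli: {midspan_deflection(P, L, a, EI):.3f} mm")
print(f"Timoshenko:      {midspan_deflection(P, L, a, EI, kGA):.3f} mm")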
These elastic constants served as inputs to the FE model as well as to the theoretical model for computing the stiffness matrix in Equation (7).

Bending Behavior

When subjected to a four-point bending load, the corrugated core specimens in the longitudinal direction failed due to tension, as shown in Figure 8a. A comparison between the average of the experimental results and the FE results for deflection at the middle of the specimens subjected to bending is shown in Figure 8b. Since the FE simulation was developed to capture the bending behavior within the linear region, there is close agreement between the two in the linear elastic region. The difference between the experimental and FE bending stiffness, calculated using Equation (3), within the linear elastic region is 1.4%. Sandwich specimens were also subjected to four-point bending, and a typical load-deflection curve and failure mode are presented in Figure 9. Unlike the corrugated core samples, the initial failure mode of the longitudinal sandwich panels was debonding between the core and face-sheets (Figure 9b) due to interfacial shear.
Bending load versus deflection at the midpoint of these sandwich panels for the theoretical and FE models was compared against the average experimental results in Figure 10a. Based on the linear region of these load-deflection curves and Equation (3), the bending stiffness of the sandwich beams for the different models was calculated and compared in Figure 10b. For the experimental results, the bending stiffness was calculated based on the results between 20% and 50% of the maximum load (i.e., the portion of the curve between about 1000 N and 2200 N in Figure 10a).
The differences between the experimental and theoretical bending stiffness within the linear elastic region are 3.6% and 5.2% for sandwich beams based on the Euler-Bernoulli and Timoshenko models, respectively. Assumptions in the theoretical models could be the sources of error when compared against the experimental results. These assumptions include considering the composite panel as a beam instead of a plate to develop the theoretical model, which results in neglecting any deformation in the y direction, and ignoring both shear and normal stresses through the thickness by assuming the sandwich panel to be a simplified 2-D structure to compute the effective properties. The difference between the Timoshenko beam model, which considers shear deformation, and the Euler-Bernoulli beam model, which does not, is not noticeable within the elastic region. Neglecting shear and normal stresses to compute the effective properties, as explained, can be an explanation for this small difference. However, this difference could be noticeable if these two theories were used to model non-linear behavior. As the theoretical results obtained from both models are generally conservative compared to the experimental ones, the theoretical model can be a good predictive tool for the design of new geometry; however, it is limited to predicting the elastic behavior. The bending stiffness predicted by the FE model differs by 6.5% from the experimental results.
This slightly larger difference compared to the theoretical bending stiffness could be due to the rigid link used for modeling the bonding area between the core and the outer layers. Failure initiation for the sandwich specimens started at the bonding area, which results in the failure mode of delamination in that area, as shown in Figure 9b. However, this bonding area was not modeled in the FE simulation, and the rigid link, which does not undergo any deformation, was used to bond the core to the face-sheets. Therefore, the FE model experiences less deformation than the actual specimen, resulting in a higher bending stiffness. The deflected shapes of the FE simulations of the corrugated core and the sandwich panel subjected to a four-point bending test, which were used to obtain the results presented in Figures 8b and 10, are shown in Figure 11. Another explanation for the difference between the model results and the experimental results could be the variation in material properties. The material properties shown in Tables 2 and 3, which were used to develop both the theoretical and FE models, were established considering the tensile test results of coupons with a density of 640 kg/m³. However, since there is always a variation between the target density that a panel is pressed to and the actual density, the bending behavior from panel to panel, and even from specimen to specimen within the same panel, can vary. Therefore, the bending stiffness predictions of these two models could be expected to vary from the experimental results, especially in the case of wood-based materials.
Sensitivity Analysis

Intrinsic variation in the material properties of wood strands, inconsistencies in the fabrication process, and variations in density may influence the material properties of the wood strand composite panels, as shown in the results so far. Even with extreme care during fabrication and testing, there will still be noticeable variation in the material properties determined (reflected by the coefficients of variation in Table 2). Such variation leads to variation in the bending stiffness of the sandwich panel and its components. In this section, a sensitivity analysis is presented to clarify the extent of variation in bending stiffness that could be expected due to a specified variation in material properties. To this end, using the FE simulation and the theoretical model, one material property was changed while the others were held constant, and the change in the bending stiffness of the sandwich specimen due to the change in the material property was obtained. The coefficient of variation (COV) for each property listed in Table 2 was used as a guide to vary each material property in the sensitivity analysis. Results of the sensitivity analysis are given in Table 4. Based on the COVs listed in Table 2, for example, the longitudinal Young's modulus (E1) was varied by ±9.4%, and the bending stiffness was computed using the FE and theoretical models. A 9.4% increase in E1 produces an increase of 8.5%, 9.3%, and 9.2% in the FE, Euler-Bernoulli, and Timoshenko bending stiffness, respectively. Meanwhile, a 9.4% reduction in E1 produces a decrease of 8.7%, 9.2%, and 9.1% in the FE, Euler-Bernoulli, and Timoshenko bending stiffness. However, a larger change in E2, ±13.4%, creates less than a ±1% change in both the FE and theoretical bending stiffness. The theoretical bending stiffness is not as sensitive as the FE prediction to changes in the shear modulus, because the effects of shear stresses and deformation through the thickness of the panel were neglected when calculating the effective properties used to develop the theoretical model.
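The one-at-a-time perturbation scheme described here is easy to script. In the sketch below, the stiffness model is a stand-in linear combination (the actual study re-ran the FE and theoretical models for each perturbation); only the base values and COVs come from Table 2.

def one_at_a_time(model, base, covs):
    """Perturb each property by +/- its COV, others held fixed,
    and report the percent change in the model output."""
    ref = model(**base)
    rows = []
    for name, cov in covs.items():
        for sign in (+1, -1):
            trial = dict(base, **{name: base[name] * (1 + sign * cov)})
            rows.append((name, sign * cov * 100, 100.0 * (model(**trial) - ref) / ref))
    return rows

def stiffness(E1, E2, G12):
    # Stand-in model: bending stiffness dominated by E1 with small
    # assumed contributions from E2 and G12 (illustrative only).
    return 1.00 * E1 + 0.05 * E2 + 0.02 * G12

base = {"E1": 9.80, "E2": 1.71, "G12": 2.56}     # GPa, Table 2
covs = {"E1": 0.094, "E2": 0.134, "G12": 0.378}  # COVs, Table 2
for name, pct, change in one_at_a_time(stiffness, base, covs):
    print(f"{name} {pct:+5.1f}% -> bending stiffness {change:+6.2f}%")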
Conclusions

Theoretical and finite element models were developed and applied to predict the linear flexural behavior of a sandwich panel with a complex three-dimensional core geometry, manufactured from small-diameter timber, under a four-point bending load. The material properties were measured as inputs to the models. The bending stiffness of the corrugated core specimens alone obtained by the FE model showed a 1.4% difference from the average of the experimental results. In the case of the sandwich panel, the average experimental bending stiffness differed from the Euler-Bernoulli, Timoshenko, and FE predictions by 3.6%, 5.2%, and 6.5%, respectively. The results indicated that a 46% increase in material density increases the longitudinal Young's modulus by as much as 107%. A sensitivity analysis of the effect of variations in material properties on the composite sandwich bending stiffness revealed that the longitudinal Young's modulus is the property with the greatest influence on the bending stiffness. The theoretical and FE models developed for the wood strand sandwich panel were utilized to conduct a parametric study [33] to understand the influence of the core geometry on the bending stiffness of the sandwich panel. Based on the results, a new core geometry with higher performance was designed to meet the structural performance requirements of building envelope components, such as walls, floors, and roofs. Theoretical models for the sandwich structure with the new core geometry were developed for beam [34] and plate [35] configurations.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

The components of the bending-extension coupling matrix ([B]) given in Equation (5) can be written as

$$B_{ij} = \int_{-h/2}^{-h_c/2} \overline{Q}_{ij}^{f}\, z\, dz + \int_{-h_c/2}^{h_c/2} \overline{Q}_{ij}^{c}\, z\, dz + \int_{h_c/2}^{h/2} \overline{Q}_{ij}^{f}\, z\, dz \quad (A1)$$

where h, h_f, and h_c are the thicknesses of the sandwich panel, the face-sheets, and the core, respectively; the first and third integrals correspond to the bottom and top face-sheets. The first and third terms in Equation (A1) are zero because of the symmetric and balanced configuration of the face-sheets. The core term can be written as a sum over the segmented parts of the corrugated core (subscripts 1 to 7, Figure 2b), which are used as the domains for integration. Because of the symmetric and balanced configuration of the UC, parts 1 and 7, parts 2 and 6, and parts 3 and 5 cancel each other out, and part 4 is zero on its own because of its symmetric geometry about the mid-plane. Therefore, the bending-extension coupling matrix ([B]) for this sandwich panel is zero.
Silencing of EEF2K (eukaryotic elongation factor-2 kinase) reveals AMPK-ULK1-dependent autophagy in colon cancer cells

EEF2K (eukaryotic elongation factor-2 kinase), also known as Ca2+/calmodulin-dependent protein kinase III, functions in downregulating peptide chain elongation through inactivation of EEF2 (eukaryotic translation elongation factor 2). Currently, there is a limited amount of information on the promotion of autophagic survival by EEF2K in breast and glioblastoma cell lines. However, the precise role of EEF2K in carcinogenesis, as well as the underlying mechanism involved, is still poorly understood. In this study, contrary to the reported autophagy-promoting activity of EEF2K in certain cancer cells, EEF2K is shown to negatively regulate autophagy in human colon cancer cells, as indicated by the increase of LC3-II levels, the accumulation of LC3 dots per cell, and the promotion of autophagic flux in EEF2K knockdown cells. EEF2K negatively regulates cell viability, clonogenicity, cell proliferation, and cell size in colon cancer cells. Autophagy induced by EEF2K silencing promotes cell survival and does not potentiate the anticancer efficacy of the AKT inhibitor MK-2206. In addition, autophagy induced by silencing of EEF2K is attributed to induction of protein synthesis and activation of the AMPK-ULK1 pathway, independent of the suppression of MTOR activity and ROS generation. Knockdown of AMPK or ULK1 significantly abrogates the EEF2K silencing-induced increase of LC3-II levels and accumulation of LC3 dots per cell, as well as cell proliferation in colon cancer cells. In conclusion, silencing of EEF2K promotes autophagic survival via activation of the AMPK-ULK1 pathway in colon cancer cells. This finding suggests that upregulation of EEF2K activity may constitute a novel approach for the treatment of human colon cancer.

Introduction

Cells consume a tremendous amount of metabolic energy for survival, most of which is used in peptide chain elongation. The rate of peptide chain elongation is modulated by EEF2, which in turn is regulated by EEF2K. 1,2 Under stress or starvation conditions, in order to reduce energy consumption, EEF2K is normally activated by phosphorylation at Ser398 via Ca2+-calmodulin or AMPK/AMP-activated protein kinase, or at Ser499 by cAMP-PRKA (protein kinase A). [3][4][5][6] The activated EEF2K in turn phosphorylates its target EEF2 at Thr56. 7 This phosphorylation inactivates EEF2, resulting in termination of peptide elongation by decreasing the affinity of the elongation factor toward the ribosome. 7,8 Conversely, stimuli such as stress and growth factors that promote protein synthesis must inhibit the activity of EEF2K. Recent studies demonstrate that signaling pathways such as those modulated by RPS6KA1/p90 (ribosomal protein S6 kinase, 90 kDa, polypeptide 1), MAPK13/p38δ/SAPK4 (mitogen-activated protein kinase 13), MTOR (mechanistic target of rapamycin) and RPS6KB (ribosomal protein S6 kinase, 70 kDa, polypeptide) can directly phosphorylate EEF2K at specific sites that inactivate EEF2K, leading to increased protein translation. [9][10][11] Apart from regulating the activity of EEF2K, AMPK can activate autophagy. 12 Autophagy is a self-degradative process involving the removal of damaged proteins and organelles that promotes a cell survival response to nutritional starvation or stress conditions. Several examples have demonstrated that AMPK is activated by an elevated AMP/ATP ratio due to cellular and environmental stress such as nutritional deprivation. 13
Autophagy can also be activated in response to many forms of cellular stress beyond nutritional deprivation, including reactive oxygen species (ROS) and DNA damage. 14,15 Recent studies have shown that autophagy can also act as an apoptosis-independent form of programmed cell death. 14 The precise role of autophagy in cellular responses to stress is far from fully elucidated. Previous studies have demonstrated that EEF2K-mediated EEF2 phosphorylation at Thr56 can induce autophagy as a survival mechanism in glioma cells and breast cancer cells, and that inhibition of EEF2K can potentiate the efficacy of anticancer agents against these cancers. [16][17][18][19][20] Silencing of EEF2K expression by siRNA reduced both basal and starvation-induced autophagy levels in glioma cells, as characterized by a decrease in the levels of the autophagic marker MAP1LC3B-II/LC3-II (microtubule-associated protein 1 light chain 3 β-II). 21,22 Eef2k knockout mouse embryonic fibroblasts (MEFs) also show a decrease in basal and nutrient deprivation-induced autophagy levels. 22 However, Chen et al. 23 report that the EEF2K inhibitor A-484954 cannot significantly inhibit cancer cell growth in lung and prostate cancer cells, a finding consistent with the effect of silencing of EEF2K in both lung and prostate cancer cells. 23 Ryazanov also found that eef2k knockout mice grow and reproduce normally. 24 Although different effects of EEF2K on cell survival have been observed, the exact mechanisms by which EEF2K regulates cell growth or autophagy are still unclear. Therefore, studies to reveal the role of EEF2K in cancer growth, as well as the molecular mechanisms involved in regulating autophagy, are highly warranted. To address this issue, we silenced or overexpressed EEF2K in human colon cancer cells to characterize the role of EEF2K in cancer growth and to reveal the molecular mechanism involved in the regulation of autophagy. Our results indicate that autophagy is induced by knockdown of EEF2K in human colon cancer cells. This response is mediated by activation of the AMPK-ULK1 (unc-51 like autophagy activating kinase 1) pathway, independent of MTOR inhibition, in a fashion different from that during nutritional deprivation.

Silencing of EEF2K induces autophagy in human colon cancer cells

Previous studies have shown that EEF2K is effective in inducing autophagy in glioma and breast cancer cells. We therefore investigated whether EEF2K could also induce autophagy in human colon cancer cells. As shown in Figure 1A, silencing of EEF2K using a single siRNA completely blocked phosphorylation of its downstream target EEF2 at Thr56 in human colon cancer HT-29 and HCT-116 cells, consistent with the fact that reduction of EEF2K activity reduces the phosphorylation of EEF2 at Thr56. 21,22 However, silencing of EEF2K markedly increased, rather than reduced, LC3-II levels in both HT-29 and HCT-116 cells, suggesting that the increased protein synthesis can induce autophagy (Fig. 1A). The same result was obtained using multiple siRNAs targeting different regions of EEF2K (Fig. 1B). These findings were further substantiated by the increased accumulation of LC3 dots in EEF2K-depleted cells (Fig. 1C). As shown in Figure 1C, EEF2K silencing significantly increased LC3 puncta accumulation in both the cytoplasm and the nucleus, and most of these LC3 puncta were concentrated in the nucleus.
The number of LC3 dots per cell was significantly increased, by more than 6-fold, in EEF2K knockdown cells as compared with the control group (Fig. 1D). Furthermore, to distinguish between induction of autophagy and inhibition of autophagic vesicle degradation in EEF2K-silenced cells, we analyzed autophagic flux in EEF2K-silenced cells in the absence or presence of the lysosomal protease inhibitors E64d and pepstatin A. As shown in Figure 1E, the protease inhibitors further increased both LC3-II and the mammalian autophagy-specific substrate SQSTM1/p62 levels in EEF2K-silenced cells when compared with vehicle treatment, suggesting that LC3-II accumulation in EEF2K-silenced cells was attributable to promotion of autophagy and not to impairment of autophagic degradation. Taken together, these results indicate that knockdown of EEF2K induces autophagy in human colon cancer cells.

Figure 1 (A-E). (A and B) HT-29 or HCT-116 cells were transfected for 48 h with nontargeting control siRNA (siCTL), a single siRNA duplex targeting EEF2K (siEEF2K; A), or multiple siRNAs targeting different regions of EEF2K (siEEF2K; B); EEF2K, phospho-EEF2 (Thr56; p-EEF2), EEF2, LC3, and ACTB/β-actin were analyzed by western blot, with densitometric analysis of LC3-II normalized to ACTB. (C and D) HT-29 or HCT-116 cells were transfected with control siRNA or a single siRNA duplex targeting EEF2K for 48 h. (C) Representative confocal immunofluorescence images of LC3 redistribution in EEF2K knockdown cells; cells were fixed with 3.5% formaldehyde for 10 min, permeabilized with 0.1% Triton X-100 for 10 min, and stained with LC3 antibody and DAPI; scale bar: 10 µm. (D) Average number of LC3 dots per cell, counted in more than 5 fields with at least 100 cells per group. (E) Representative western blot and densitometric analysis (normalized to ACTB) of the effect of the lysosomal protease inhibitors E64d plus pepstatin A (Pep A; 10 µg/ml for 45 h, added 3 h after transfection) on EEF2K silencing-induced LC3-II accumulation in HT-29 cells. All quantitative data represent the means ± SEM of at least 3 independent experiments; *P < 0.05, $P < 0.01, and #P < 0.001, vs. the siCTL group (A, B, and D) or vehicle treatment only (E).

BECN1 and ATG7 are required for autophagy in response to EEF2K silencing

A series of autophagy-related (ATG) genes are involved in the process of autophagy, and we asked whether autophagy induced by silencing of EEF2K depends on the regulation of specific proteins of the ATG family. ATG5 and ATG7 (a ubiquitin-activating enzyme homolog) are required for the initiation of autophagy, and BECN1 is required for the initiation of autophagosome formation. Previous studies show that autophagy can be induced through ATG5-, BECN1-, or ATG7-dependent or -independent signaling pathways. 14,25 To determine whether induction of autophagy by EEF2K silencing is related to ATG5, BECN1, or ATG7, we first analyzed the expression levels of ATG5, BECN1, and ATG7 separately by western blot. As shown in Figure 2A, knockdown of EEF2K significantly increased the protein levels of BECN1 and ATG7, but not ATG5. The increase in BECN1 and ATG7 levels in EEF2K-depleted cells is attributed to protein synthesis and not to a transcriptional increase (Fig. 2B). To further validate that the increase of BECN1 and ATG7 is due to protein synthesis, we blocked protein degradation with MG132. Protein levels of BECN1 and ATG7 accumulated significantly in EEF2K-depleted cells after exposure to MG132, suggesting that EEF2K silencing does not block protein degradation of BECN1 and ATG7 (Fig. 2C). Taken together, the increase of both BECN1 and ATG7 in EEF2K knockdown cells is not due to blockage of degradation but to protein synthesis. We then silenced BECN1 using siRNA in HT-29 cells; knockdown of BECN1 significantly blocked the accumulation of LC3-II in EEF2K-silenced cells (Fig. 2D). Similar to the effect of BECN1 knockdown, silencing of ATG7 also markedly attenuated the accumulation of LC3-II (Fig. 2E). Moreover, the number of LC3 dots per cell was significantly reduced after silencing of BECN1 or ATG7 in EEF2K knockdown cells (Fig. 2F). Taken together, these results indicate that the upregulation of BECN1 and ATG7 is responsible for the autophagy induced by EEF2K silencing.

Figure 2 (C-F). (C) Effect of MG132 on the protein levels of ATG5, BECN1, and ATG7 in EEF2K knockdown cells: HCT-116 cells were transfected with nontargeting control siRNA (siCTL) or a single EEF2K siRNA duplex (siEEF2K) for 48 h and, before being harvested for western blot, treated with MG132 (10 µM) for 12 h. (D and E) Representative western blots and densitometric analysis (normalized to ACTB) of the effects of BECN1 siRNA (D) and ATG7 siRNA (E) on LC3-II levels induced by EEF2K silencing: HT-29 cells were transfected with nontargeting siRNA, siEEF2K, BECN1 siRNA (siBECN1), ATG7 siRNA (siATG7), siEEF2K plus siBECN1, or siEEF2K plus siATG7 for 48 h. Quantitative data represent the means ± SEM of at least 3 independent experiments; *P < 0.05 and $P < 0.01, vs. the siEEF2K group. (F) Effects of BECN1 siRNA and ATG7 siRNA on LC3 dot accumulation induced by EEF2K silencing: HT-29 cells were treated with siRNAs against EEF2K, BECN1, or ATG7 as in (D and E), fixed, stained for LC3, and imaged; the average number of LC3 dots per cell was counted in more than 5 fields with at least 100 cells per group and expressed as the means ± SEM of 3 independent experiments; #P < 0.001, vs. the EEF2K siRNA group (siEEF2K).

EEF2K silencing-induced autophagy functions to promote colon cancer cell survival

It has been reported that targeting EEF2K by siRNA reduces cancer growth in glioma and breast cancer cells. 17,21 Contrary to this, silencing of EEF2K significantly promoted colon cancer cell viability and colony formation, suggesting that EEF2K negatively regulates cell proliferation in human colon cancer cells (Fig. 3A, B, and D). These findings were confirmed by the decrease of cell viability and colony formation in EEF2K-overexpressing cells as compared with control (Fig. 3A, C, and E). To further validate the observations on cell survival in EEF2K-silenced cells, we analyzed cell size and cell number in EEF2K knockdown cells as well as in EEF2K-overexpressing cells. Both cell size and cell number in EEF2K-depleted cells were significantly increased compared with the control group (Fig. 3F and H), while cell size and cell number were decreased in EEF2K-overexpressing cells (Fig. 3G and I). Taken together, EEF2K silencing promotes cell growth and cell proliferation in human colon cancer cells. This finding is further substantiated by the result that EEF2K silencing significantly attenuated the antitumor efficacy of oxaliplatin against colon cancer cells (Fig. 3J).
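As a side note, the per-cell LC3 dot statistics used in Figures 1 and 2 (means ± SEM over independent experiments, with significance vs. the control group) can be scripted in a few lines. The numbers below and the choice of Welch's t-test are illustrative assumptions only; the paper does not report raw counts or name its statistical test.

```python
import numpy as np
from scipy import stats

def mean_sem(per_experiment_means):
    """Mean +/- SEM across independent experiments (n = number of experiments)."""
    x = np.asarray(per_experiment_means, dtype=float)
    return x.mean(), x.std(ddof=1) / np.sqrt(x.size)

# Hypothetical per-experiment mean LC3 dots/cell (3 independent experiments each).
siCTL   = [1.8, 2.2, 2.0]     # nontargeting control siRNA
siEEF2K = [12.5, 13.9, 12.9]  # EEF2K knockdown (> 6-fold increase, as in Fig. 1D)

m_ctl, sem_ctl = mean_sem(siCTL)
m_kd,  sem_kd  = mean_sem(siEEF2K)
t, p = stats.ttest_ind(siEEF2K, siCTL, equal_var=False)  # Welch's t-test

print(f"siCTL:   {m_ctl:.1f} +/- {sem_ctl:.1f} dots/cell")
print(f"siEEF2K: {m_kd:.1f} +/- {sem_kd:.1f} dots/cell "
      f"({m_kd / m_ctl:.1f}-fold, P = {p:.3g})")
```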
Our findings in colon cancer cells are in accordance with other reports that knockdown of EEF2K by siRNA, as well as reduction of EEF2 phosphorylation by the EEF2K inhibitor A-484954 at effective concentrations, has little inhibitory effect on cancer cell growth in certain cancer cells, including lung and prostate cancer, under both serum and serum-free conditions. 23 In addition, overexpression of EEF2K could significantly enhance the antitumor efficacy of oxaliplatin against colon cancer cells, indicating that an increase of EEF2K activity could be used to treat colon cancer (Fig. 3K). Furthermore, given that intracellular autophagy can either promote cell survival or induce programmed cell death, we investigated the role of autophagy in EEF2K knockdown colon cancer cells. Previous studies report that inhibition of EEF2K-mediated autophagy by silencing of BECN1 can significantly enhance the efficacy of anticancer agents against glioma and breast cancer cells, suggesting that this autophagy functions as a cell-survival mechanism. 26,27 In line with this finding, we found that disruption of autophagy by knockdown of BECN1 or ATG7 reduced cell viability and clonogenicity in EEF2K-silenced colon cancer cells (Fig. 4A and B). Similar to the effect of BECN1 and ATG7 knockdown, the blockage of autophagic flux by the protease inhibitors E64d and pepstatin A attenuated the increase of cell viability induced by EEF2K silencing (Fig. 4C). These results indicate that the increase of cell viability by EEF2K silencing in colon cancer cells is attributed to induction of the cell-survival mechanism of autophagy. Considering that autophagy induced by EEF2K silencing acts as a cell-survival mechanism in colon cancer cells, upregulation of EEF2K as an anticancer approach might be feasible in human colon cancer.

Silencing of EEF2K cannot potentiate the anticancer efficacy of MK-2206 against colon cancer cells

AKT is an important anticancer target. The AKT inhibitor MK-2206 has been well studied for its role in promoting autophagy through activation of EEF2K and inactivation of EEF2 in glioma cells, as indicated by the increase of LC3-II. 16 Consistent with this finding, MK-2206 at effective concentrations such as 0.1-5 µM also significantly promoted autophagy, as indicated by the accumulation of LC3-II in human colon cancer cells, whereas EEF2 phosphorylation at Thr56 was not markedly increased in cells treated with MK-2206 over the same concentration range (Fig. 5A). These results indicate that autophagy induced by MK-2206 in colon cancer cells cannot be completely attributed to activation of EEF2K. To further validate the effect of EEF2K on AKT inhibition-induced autophagy in colon cancer cells, cells were transfected with siRNA against EEF2K before exposure to MK-2206. As shown in Figure 5B, knockdown of EEF2K could not block the autophagic response triggered by the AKT inhibitor MK-2206 in human colon cancer cells, suggesting that EEF2K does not mediate MK-2206-induced autophagy. This result contradicts the conventional notion that inhibition of AKT by MK-2206 activates EEF2K-dependent autophagy. Although MK-2206 at 10 µM increased EEF2 phosphorylation at Thr56, knockdown of EEF2K could not potentiate the anticancer efficacy of MK-2206, implying that EEF2K cannot serve as an anticancer target for colon cancer therapy (Fig. 5A and C).
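The potentiation question in this section can also be phrased quantitatively. One standard formalization — assumed here for illustration only, and not the authors' stated analysis — is Bliss independence: if the measured combination merely matches the product of the single-treatment survival fractions, there is no potentiation.

```python
def bliss_expected(viab_a, viab_b):
    """Expected fractional viability if treatments A and B act independently."""
    return viab_a * viab_b

# Hypothetical fractional viabilities (1.0 = untreated control).
v_mk2206   = 0.60   # MK-2206 alone
v_siEEF2K  = 1.20   # EEF2K knockdown alone raises viability in colon cancer cells
v_combined = 0.72   # measured combination (illustrative)

expected = bliss_expected(v_mk2206, v_siEEF2K)
print(f"Bliss-expected combination viability: {expected:.2f}")
print("synergy" if v_combined < expected - 0.05 else
      "no potentiation (additive or antagonistic)")
```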
EEF2K has been demonstrated to play a critical role in induction of autophagy in glioma cells in response to cellular stress such as AKT inhibition by MK-2206 and nutrient deprivation. However, the underlying molecular mechanisms by which EEF2K controls autophagy remain unknown. Our findings on the role of EEF2K in autophagic response induced by AKT inhibition add new insight to the AKT-mediated autophagy pathway. In addition, our results also suggest that upregulation of EEF2K activity might constitute a therapeutic option for the treatment of certain cancers including human colon cancer. The AMPK-ULK1 pathway is required for autophagy in response to EEF2K silencing Knockdown of EEF2K stimulates protein synthesis, which results in ATP consumption and then leads to an increase of AMP/ATP ratio. 28 In line with this finding, ATP was markedly reduced in colon cancer cells after EEF2K silencing (Fig. 6A). Previous studies show that an increased AMP/ATP ratio can activate AMPK by phosphorylation at Thr172. 29 Consistent with this finding, AMPK was significantly activated in both EEF2K-depleted HT-29 and HCT-116 cells (Fig. 6B). It has been demonstrated that AMPK can activate autophagy by activation of ULK1 or by inactivation of MTOR. [30][31][32] In this study, we found that ULK1 was activated by phosphorylation at Ser555 and dephosphorylation at Ser757, but MTOR was not inactivated in EEF2K-depleted cells (Fig. 6B). These findings suggest that AMPK may enhance autophagy through activation of ULK1 but not via inhibition of MTOR pathway. In order to validate that EEF2K silencing leads to protein synthesis, which depletes ATP levels and then activates AMPK-ULK1-mediated autophagy, we blocked protein synthesis by cycloheximide in EEF2K-depleted cells and then detected the active form of ULK1 and LC3 levels. As shown in Figure 6C, cycloheximide could completely block both ULK1 phosphorylation at Ser555 and LC3-II accumulation induced by EEF2K silencing. This result indicated that protein synthesis induced by EEF2K silencing is responsible for ULK1 activation and autophagy accumulation. In order to further validate whether AMPK and its downstream target ULK1 are involved in autophagy induced by EEF2K knockdown, the effects of PRKAA1 and PRKAA2 (AMPKα) siRNA and ULK1 siRNA on autophagy in EEF2K-depleted cells were analyzed. As shown in Figure 6D, AMPKα siRNA could block ULK1 phosphorylation at Ser555 and significantly reduce LC3-II accumulation in EEF2K knockdown cells, suggesting that AMPK is responsible for EEF2K knockdown-induced autophagy. Furthermore, silencing of ULK1 significantly reduced LC3-II levels, indicating that ULK1 is also involved in autophagy induced by EEF2K silencing (Fig. 6E). These findings were further substantiated by quantification of the amount of LC3 dots per cell, showing that knockdown of AMPKα or ULK1 could completely block LC3 dots accumulation induced by EEF2K silencing (Fig. 6F). In addition, we found that knockdown of AMPKα and ULK1 could significantly inhibit cell growth in EEF2K-depleted human colon cancer cells (Fig. 6G). ROS production is one form of cellular stress that plays a critical role in the induction of autophagy. ROS levels were therefore analyzed using DCFDA staining in HT-29 cells after EEF2K silencing, followed by fluorescence microscopy and flow cytometry. 
Our results demonstrated that ROS levels were not significantly increased in EEF2K-depleted cells as compared with control or H2O2 treatment, suggesting that ROS production is not involved in the autophagy induced by silencing of EEF2K (Fig. 6H and I). Taken together, these results indicate that autophagy induced by silencing of EEF2K is attributed to activation of the AMPK-ULK1 pathway, independent of the suppression of MTOR activity and the stimulation of ROS production.

Figure 6 (A-I). (A and B) HT-29 or HCT-116 cells were transfected with nontargeting control siRNA (siCTL) or EEF2K siRNA (siEEF2K) for 48 h. (A) Silencing of EEF2K reduces the ATP level; after transfection, the ATP level was analyzed using the ATPlite Luminescence Assay Kit. (B) Silencing of EEF2K activates AMPK by phosphorylation at Thr172 and activates ULK1 by phosphorylation at Ser555 and dephosphorylation at Ser757, but does not inactivate MTOR; the protein levels of EEF2K, phospho-AMPKα (Thr172; p-AMPKα), AMPKα, phospho-ULK1 (Ser555), phospho-ULK1 (Ser757), ULK1, phospho-MTOR (Ser2448; p-MTOR), and ACTB were analyzed by western blot. (C) Representative western blot and densitometric analysis (normalized to ACTB) of the effect of cycloheximide (CHX; 10 µg/ml for 45 h, added 3 h after transfection) on EEF2K silencing-induced LC3-II accumulation in HT-29 or HCT-116 cells. (D-G) HT-29 cells were transfected with control siRNA, EEF2K siRNA, PRKAA1 and PRKAA2/AMPKα siRNA (siAMPKα), ULK1 siRNA (siULK1), siEEF2K plus siAMPKα, or siEEF2K plus siULK1 for 48 h. (D) Representative western blot and densitometric analysis (normalized to ACTB) of the effects of AMPKα siRNA on phospho-ULK1 (Ser555) and LC3-II levels induced by EEF2K silencing. (E) Representative western blot and densitometric analysis (normalized to ACTB) of the effect of ULK1 siRNA on LC3-II levels induced by EEF2K silencing. (F) Effects of AMPKα siRNA and ULK1 siRNA on LC3 dot accumulation induced by EEF2K silencing; the average number of LC3 dots per cell was counted in more than 5 fields with at least 100 cells per group and expressed as the means ± SEM of 3 independent experiments; #P < 0.001, vs. the EEF2K siRNA group (siEEF2K). (G) Effects of AMPKα siRNA and ULK1 siRNA on the increase of cell viability induced by EEF2K silencing; cell viability was analyzed by MTT assay. (H and I) Effect of EEF2K silencing on ROS generation: HT-29 cells were transfected with control siRNA or EEF2K siRNA for 48 h and treated with 20 µM DCFDA for 30 min; ROS levels were detected by fluorescence microscopy (H) or by flow cytometry (I); H2O2 (1 mM) treatment for 2 h was used as a positive control; scale bar: 20 µm. All quantitative data represent the means ± SEM of at least 3 independent experiments; *P < 0.05, $P < 0.01, and #P < 0.001, vs. the siCTL group (A) or the EEF2K siRNA group (siEEF2K; C, D, E, and G).

Discussion

EEF2K is well known for its role in the negative regulation of protein translation through inactivation of EEF2 by phosphorylation at Thr56. Previous studies report that activated EEF2K can induce autophagy in glioma and breast cancer cells. However, the effect of EEF2K on the growth of colon cancer cells, as well as the underlying mechanism involved, is not understood. In this study, we demonstrate that silencing of EEF2K induces autophagy in colon cancer cells and that the AMPK-ULK1 pathway is required for this autophagy. Previous studies report that knockdown of EEF2K by siRNA abrogates autophagy and thereby results in inhibition of tumor growth, augmentation of apoptosis, and sensitization of glioma or breast cancer to the anticancer agents doxorubicin or MK-2206. 16,17 In contrast, silencing of EEF2K by siRNA enhances autophagy instead of blocking it in human colon cancer cells, and the anticancer efficacy of MK-2206 is not further enhanced in colon cancer cells after silencing of EEF2K. This finding in colon cancer cells is in line with accumulating reports that knockdown of EEF2K by siRNA does not inhibit cell growth of lung cancer and prostate cancer cells under both serum and serum-free conditions. 23 In addition, mice lacking EEF2K do not exhibit delays in development and reproduction, indicating that disruption of EEF2K is not sufficient for the inhibition of cell growth. 24 Knockdown of EEF2K can activate EEF2 by reducing EEF2 phosphorylation at Thr56, resulting in promotion of protein synthesis. It has also been reported that inhibition of EEF2 rapidly arrests protein synthesis and leads to cancer cell growth inhibition. 33 It is therefore conceivable that activation of protein synthesis by silencing of EEF2K does not arrest cancer growth in colon cancer cells. Taken together, EEF2K performs 2 apparently opposite functions, either promoting or inhibiting both autophagy and cancer growth, in a cell type-dependent manner.

Besides the cytoplasmic LC3-positive autophagosomes formed during autophagy, recent studies have demonstrated that LC3 punctate signals can also concentrate in the nucleus. For example, the picornavirus foot-and-mouth disease virus can induce LC3 puncta to concentrate close to the nucleus in > 95% of cells within 2 h after infection of CHO cells. 34 C2-ceramide, temozolomide, and arsenic trioxide can induce cytoplasmic and nuclear localization of LC3B in U373-MG glioblastoma cells. 35 In addition, positive staining of LC3B is observed in the cytoplasm and nucleus of glioblastoma tissue from patients. These findings are consistent with the LC3 localization in EEF2K knockdown colon cancer cells. A recent study shows that CR 3294 significantly induces the accumulation of LC3-II in both the nucleus and cytosol of breast cancer cells, and such nuclear LC3-II accumulation is more abundant when the cells are treated with CR 3294 under hypoxia, 36 indicating that HIF1A may promote LC3 puncta to concentrate at the nuclear membrane or within the nucleus. However, the underlying mechanism of LC3 nuclear localization is not completely understood. We subsequently investigated whether certain proteins lead the autophagosomes induced by EEF2K silencing into the nucleus.

Autophagy is associated with either cell survival or cell death. Under conditions of severe stress, autophagy can increase cell death. 14 However, in some instances, autophagy occurs as part of normal metabolism to remove damaged proteins or organelles, and some degree of autophagy is necessary for maintaining normal physiology. 37 Therefore, cell survival requires an appropriate degree of autophagy. To date, the role of autophagy in carcinogenesis has not been completely understood.
The role of autophagy in cancer in general is quite complex and is likely dependent on the tumor tissue of origin, the stage, and the constellation of genetic mutations and epigenetic changes. 38 Resistance of cancer cells to treatment can be associated with both autophagy and inhibition of the more common apoptotic cell death pathway. Under nutrient-deprivation conditions, high levels of EEF2K can be activated to block translation elongation and adapt to the stress condition, suggesting that activated EEF2K functions to promote cell survival. 39 Inhibition of EEF2K potentiates the anticancer efficacy of the AKT inhibitor MK-2206 in glioma cells. 16 According to these findings, blockage of EEF2K could represent a treatment option for breast cancer and glioblastoma. Contrary to this, we found that inhibition of EEF2K by knockdown activates autophagy and thereby promotes cell survival under nutrient-rich conditions or in the presence of the antitumor drug oxaliplatin, indicating that EEF2K plays an important role in negatively regulating cell growth in human colon cancer cells. Our finding is consistent with the effect of the EEF2K inhibitor A-484954 on cell growth in lung and prostate cancer cells. 23 Cancer cells grow and divide much more rapidly than normal cells and thus have a much higher demand for nutrients and oxygen. Therefore, upregulation of EEF2K could represent an approach to treat certain cancers such as human colon cancer. Knowledge of the mechanisms and molecules involved will help us to understand the role of EEF2K in the growth and survival of different cancers. Taken together, modulation of EEF2K activity in different cancers might constitute a feasible therapeutic method for cancer treatment.

The signaling pathways that lead to autophagy under nutrient-deprivation conditions have been clearly characterized. MTOR is a central cell growth regulator that links nutrient signals and autophagy. Under starvation conditions, MTOR, which functions as a critical negative regulator of autophagy, is inhibited. 40,41 However, mammalian cells rarely experience nutrient deprivation under normal physiological conditions, and inactivation of MTOR is not necessary for autophagy under nutrient-rich conditions. 42 Guo et al. 32 report that lipopolysaccharide can induce autophagy via activation of AMPK but not via inhibition of MTOR. Consistent with this finding, autophagy induced by silencing of EEF2K is attributed to activation of AMPK, independent of MTOR inhibition, in colon cancer cells under normal nutrient conditions. It appears that EEF2K plays opposite roles, either inducing or inhibiting autophagy, in different cancer types. The signaling pathway downstream of EEF2K-EEF2 in controlling autophagy remains unknown. In this study, we report for the first time that silencing of EEF2K enhances autophagy-related genes and promotes cell survival via the AMPK-ULK1-dependent pathway (Fig. 7). This finding indicates that an increase of EEF2K activity might reduce the expression of autophagy-related genes such as BECN1 and ATG7 and inactivate the autophagic AMPK-ULK1 pathway; these, in turn, attenuate autophagy and block the growth of human colon cancer cells. We also report that an increase of EEF2K activity can suppress autophagy and enhance the efficacy of drugs against colon cancer cells. In addition, it has been demonstrated that EEF2K is downregulated in human colorectal carcinoma patients (Fig. S1). 43 Therefore, upregulation of EEF2K can be a novel strategy for the treatment of human colon cancer. Considering the multiple autophagy pathways regulated by EEF2K, attenuation of autophagy by a direct increase of EEF2K activity would be a better method than directly targeting a single autophagy pathway in human colon cancer cells. The approach of targeting EEF2K has gained some attention for treating glioma and breast cancer. However, the general applicability of this approach is questionable in view of our findings. Cancer tissue typing in terms of its autophagic response toward EEF2K inhibition should be performed to assess whether a specific cancer would benefit from this approach. According to our findings, upregulation of EEF2K activity may be developed as a novel approach for the treatment of human colon cancer.

Figure 7. Signaling connections involved in autophagy pathways sensitive to EEF2K silencing in colon cancer cells. EEF2K can directly inactivate EEF2 by phosphorylation at Thr56, which negatively regulates peptide chain elongation. Silencing of EEF2K can upregulate the protein levels of BECN1 and ATG7. The upregulation of protein synthesis by EEF2K silencing can downregulate the ATP level and increase the AMP/ATP ratio; this in turn directly activates AMPKα by phosphorylation at Thr172 and then activates ULK1 by phosphorylation at Ser555, leading to autophagy. Autophagy induced by EEF2K silencing can promote human colon cancer cell proliferation. Inhibition of protein synthesis by cycloheximide can attenuate the ULK1 activation and autophagy induced by EEF2K silencing. Arrows represent promotion events; blunt arrows indicate suppression events.

Overexpression of human EEF2K

A plasmid, pDONR223-EEF2K, containing the full-length human EEF2K coding region was obtained from Addgene (Addgene plasmid 23726, USA). 44 Amplification of the coding region was performed by PCR using GeneAmp High Fidelity Enzyme Mix (Life Technologies, 4328216). The PCR conditions were denaturation at 94 °C for 3 min, followed by 20 cycles of 94 °C for 30 s, 55 °C for 30 s and 72 °C for 2 min 30 s, with a final extension step at 72 °C for 10 min. The products were purified using the QIAquick Gel Extraction Kit (Qiagen, 28706) and then inserted into a pcDNA3.1/V5-His TOPO TA expression vector (Life Technologies, K4800-01). The resultant construct, encompassing EEF2K with V5 and polyhistidine epitope tags, was confirmed by sequencing.

Immunofluorescence staining

Cells were grown on slides and transfected with siRNAs. After 48 h of transfection, cells were washed 3 times with PBS (137 mM NaCl, 2.7 mM KCl, 8 mM Na2HPO4, 1.46 mM KH2PO4, pH 7.4), fixed with 3.5% formaldehyde in PBS for 10 min, washed once with PBS, permeabilized with 0.1% Triton X-100 (USB, 22686) in PBS for 10 min, and blocked with 0.5% BSA (Sigma, A2153) in PBS for 15 min. Cells were incubated with LC3 antibody (1:150) for 2 h at room temperature, followed by incubation with Alexa Fluor 488 antibody (1:200) for 1 h at room temperature. All antibodies were diluted with 0.5% BSA in PBS. Slides were mounted with Vectashield mounting medium and images were taken with an Olympus FV1000 confocal microscope (Olympus, PA, USA) using a 60× 1.35 NA oil objective.
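The thermocycling program given under "Overexpression of human EEF2K" above can be written as a compact data structure for protocol records. A minimal sketch follows; the total programmed hold time is a simple sum that the paper does not state, and ramp times are ignored.

```python
# PCR program for amplifying the EEF2K coding region, as described in the text.
# Each step: (temperature in degrees C, hold time in seconds).
denaturation = [(94, 180)]                      # 94 C, 3 min
cycling      = [(94, 30), (55, 30), (72, 150)]  # repeated for 20 cycles
n_cycles     = 20
extension    = [(72, 600)]                      # final extension, 10 min

total_s = (sum(t for _, t in denaturation)
           + n_cycles * sum(t for _, t in cycling)
           + sum(t for _, t in extension))
print(f"programmed hold time: {total_s / 60:.0f} min")  # ~83 min, excluding ramps
```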
Aspects on the Optimization of Die-Sinking EDM of Tungsten Carbide-Cobalt

At present, due to their properties, tungsten carbide-cobalt (WC-Co) composite materials are in huge demand by industry to manufacture special tools, dies/molds and components subject to erosion. Powder metallurgy is the usual process applied to obtain WC-Co products, but in some cases this process is unable to produce tools of very complex shapes and highly intricate details. Thus, additional conventional and non-conventional machining processes are required. In this context, electrical discharge machining (EDM) is an efficient alternative process. However, the EDM parameters have to be properly set for each tungsten carbide-cobalt composition and electrode material to achieve an appropriate level of machining performance. In this work, a special grade of tungsten carbide-cobalt was used as workpiece and a copper-tungsten alloy as electrode. Experiments on important EDM electrical and non-electrical parameter settings with reference to material removal rate, electrode wear ratio and surface roughness were carried out under typical rough and finish machining. This paper contributes an attempt to provide insightful guidelines to optimize electrical discharge machining of WC-Co composite materials using CuW alloy electrodes.

Introduction

Electrical discharge machining, generally known as EDM, is a thermoelectric process of non-conventional machining, where electrical discharges occur between two electrodes immersed in a dielectric fluid, promoting heating, vaporization and removal of material. In EDM there are no physical cutting forces between the electrode and the workpiece, avoiding mechanical stresses, chatter and vibrations during machining, as reported by Kunieda et al. (2005). For that reason, EDM is widely applied to machine very complex shapes with high accuracy in hard materials.

Presently, as shown by Byrne et al. (2003), tungsten carbide (WC) and its composites (WC-Co) are in huge demand by industry to manufacture different kinds of special tools, dies/molds and components subject to erosion, due to their high compressive strength, hardness and resistance to wear over a large range of temperatures. As Dreyer et al. (1999) stated, powder metallurgy is the usual process for obtaining WC-Co products. In this process, powder raw material is compacted and sintered to the shape of the product. But in several cases powder metallurgy is unable to produce some complex shapes with high accuracy in WC-Co composite materials. This leads to the need of applying other processes to reach the final quality required for the product. Mahdavinejad and Mahdavinejad (2005) reported that some conventional machining processes can be used to machine these materials. Efforts have been made with CBN cutting tools, but the results have shown limited success due to the high hardness of WC-Co composite materials combined with the small and intricate geometry of the workpieces. They also mention that, when high accuracy in various kinds of cemented carbides is needed, the only generally accepted conventional machining process is grinding. However, micro-cracks are produced on the workpiece surface due to the high temperatures generated during machining. Consequently, additional finish operations become necessary to eliminate these cracks and to achieve the final workpiece accuracy. Kulkarni et al. (2002) reported that, among other non-conventional machining processes, electro-chemical machining (ECM) and electrical discharge machining (EDM) are alternative processes that can be used to machine WC-Co composite materials with high accuracy and intricate geometries. On the other hand, Watson and Freer (1980) remarked that ECM produces a resistant oxide layer on the workpiece surface, promoting a very slow material removal rate, which is further decreased when a high cobalt percentage is used in the alloy.

In this context, Jahan et al. (2009) pointed out that EDM technology has been advancing as a promising process to manufacture high precision products in WC-Co composite materials. Lee and Li (2003) investigated the surface integrity of WC samples machined by EDM and remarked that the electrical discharge machining of diverse kinds of WC-Co materials with optimized parameters and different electrode materials is still lacking deep investigation. This observation is in line with the work of Ho and Newman (2003) on the state of the art in EDM, where they showed that a significant number of recent researches are still focused on improving EDM performance measures such as material removal rate, electrode wear rate and surface integrity. Abbas et al. (2007) also reviewed the current research trends in EDM and pointed out that throughout the last decades many researchers have carried out theoretical and experimental tests aiming to optimize the EDM electrical and non-electrical variables for many kinds of workpiece and electrode materials.

Therefore, in the present study, a detailed sequence of optimization experiments with reference to important EDM electrical and non-electrical variables on the machining of tungsten carbide-cobalt (WC-Co) using a copper-tungsten alloy (CuW) electrode was carried out. Three important machining characteristics regarding the EDM performance were investigated. The first one is the material removal rate V_w, which means the volume of material removed from the workpiece per minute. The second is the volumetric relative wear ϑ, which corresponds to the ratio between the tool electrode wear rate V_e and the material removal rate V_w. The third characteristic is the average surface roughness R_a. Accordingly, this paper contributes an attempt to provide insightful guidelines to optimize electrical discharge machining of WC-Co composite materials using CuW electrodes.

Some Theoretical EDM Background

This section presents information related to the EDM material removal mechanism in order to enlarge the understanding of the experimental methodology proposed in this study. From investigations of DiBitonto et al. (1989), Mukund et al. (1989), Eubank et al. (1993), König and Klocke (1997), Kunieda et al. (2005), and many other researchers, the material removal in electrical discharge machining is associated with the erosive effect produced when spatially and temporally discrete discharges occur between two electrically conductive materials. Sparks of short duration, ranging from 0.1 to 4000 µs, are generated in a liquid dielectric working gap separating the electrode and the workpiece (10-1000 µm). The discharge energy W_e ≈ u_e · i_e · t_e [J] released by the generator is responsible for melting a small quantity of material of both electrode and workpiece by conduction heat transfer. Subsequently, at the end of the pulse duration, a pause time begins and the melted pools are removed by forces which can be of electric, hydrodynamic, thermodynamic and spalling nature.

Figure 1(A) briefly presents the phases of a discharge in the EDM process and Fig. 1(B) shows the concept of EDM. The first phase is the ignition phase, which represents the lapse corresponding to the breakdown of the high open circuit voltage û_i applied across the working gap down to the fairly low discharge voltage u_e, which normally ranges from 10 to 40 V. This period is known as the ignition delay time t_d [µs]. The second phase, which occurs instantaneously after the first one when the current rapidly increases to the discharge current î_e [A], is the formation of a channel of plasma surrounded by a vapor bubble. The third phase is the discharge phase, when the channel of plasma of high energy and pressure is sustained for a period of time t_e [µs], causing melting and evaporation of a small amount of material in both electrode and workpiece. The fourth, and last, phase is the collapse of the channel of plasma caused by turning off the electric energy, which causes the molten material to be violently ejected. At this time, known as the interval time t_o [µs], a part of the molten and vaporized material is flushed away by the flow of the dielectric fluid across the gap and the rest solidifies in the recently formed crater and its surroundings. During the interval time t_o, cooling of electrode/workpiece and de-ionization of the working gap also occur, which are necessary to promote an adequate dispersion of successive discharges along the surfaces of the electrode and the workpiece. This process continues until the geometry of the part is completed. Considering the aforementioned EDM phenomenon, an asymmetric material removal of the electrode and the workpiece can be achieved by the appropriate choice of electrical parameters, electrode polarity, type of working gap flushing, planetary movement of the electrode and thermophysical properties of electrode/workpiece materials. According to Amorim & Weingaertner (2002), another EDM variable strictly associated with the electrical parameters and that influences the machining characteristics is the duty factor τ, illustrated in Fig. 1. The duty factor can affect the material removal rate V_w, the volumetric relative wear ϑ and the workpiece surface roughness R_a.
The duty factor τ is the ratio between the pulse duration t_i and the pulse cycle time t_p = t_i + t_o. The value of the duty factor τ should be chosen as high as possible. The usual procedure to increase the value of τ is to reduce the pulse interval time t_o while keeping the pulse duration t_i constant. This procedure increases the discharge frequency, promoting better rates of V_w and lower values of ϑ. An important aspect regarding the choice of high values of τ is the elevation of the contamination concentrated in the working gap. According to Schumacher (1990), some concentration of sub-microscopic particles, fibers or moisture drops in the working gap can reduce the ignition delay time t_d. This happens because these particles arrange themselves in such a way that a kind of bridge occurs, intensifying the electric field, which then quickly fires another discharge. On the other hand, very high values of the duty factor τ promote many short-circuits and arc-discharges, causing low values of V_w and high levels of ϑ. In current practice of EDM of metal alloys, conservative decisions are taken to gain safer machining performance, as stated by Wang et al. (1995). This means the use of a duty factor τ = 0.5 (t_i = t_o) in order to avoid short-circuits and arc-discharges and to maintain good flushing conditions. For a duty factor higher than 0.5 (t_i > t_o) the machining conditions might become worse and arcing damage can occur. Values of the duty factor lower than 0.5 (t_i < t_o) lead to a low machining rate.
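The pulse-train relations above — W_e ≈ u_e · i_e · t_e for the energy per discharge and τ = t_i/(t_i + t_o) for the duty factor — are simple enough to tabulate directly. The sketch below is a minimal illustration, not taken from the paper: it assumes a typical discharge voltage u_e = 25 V (the paper does not quote one) and takes t_i equal to the rough-machining optimum t_e = 200 µs, neglecting the ignition delay, to show how shortening t_o at constant t_i raises τ and the ideal discharge frequency.

```python
def discharge_energy(u_e, i_e, t_e):
    """W_e ~ u_e * i_e * t_e [J], with u_e in V, i_e in A, t_e in s."""
    return u_e * i_e * t_e

def duty_factor(t_i, t_o):
    """tau = t_i / (t_i + t_o); pulse cycle time t_p = t_i + t_o."""
    return t_i / (t_i + t_o)

u_e, i_e = 25.0, 32.0   # V (assumed value), A (rough machining in this work)
t_i = 200e-6            # s; taken equal to the optimum t_e, ignition delay neglected
print(f"W_e per discharge ~ {discharge_energy(u_e, i_e, t_i) * 1e3:.0f} mJ")

for t_o in (200e-6, 100e-6, 50e-6, 25e-6):   # stage-two interval times
    tau = duty_factor(t_i, t_o)
    f_max = 1.0 / (t_i + t_o)                # ideal pulse frequency, no delays
    print(f"t_o = {t_o * 1e6:3.0f} us -> tau = {tau:.2f}, f <= {f_max / 1e3:.1f} kHz")
```

Run as written, this reproduces the duty factors quoted in stage two of the experiments (0.5, 0.67, 0.8 and 0.89) and makes explicit why a smaller t_o raises the discharge frequency at fixed pulse energy.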
Experimental Methodology

In this work, a progression of experiments on the electrical discharge machining of a special grade of tungsten carbide-cobalt using copper-tungsten electrodes under rough and finish process conditions was performed. The tests were designed to adequately assess the effects of the input EDM independent parameters, namely discharge current i_e, discharge duration t_e, open circuit voltage û_i and duty factor τ, on the EDM output dependent machining characteristics: material removal rate V_w, volumetric relative wear ϑ and workpiece surface roughness R_a. The discharge currents adopted here represent typical values for rough and finish EDM machining.

Experimental Procedure

The optimization of the EDM machining characteristics was carried out in three stages. The range of the variables used to perform the experiments is shown in Table 1. The sequence implemented for each stage is described as follows:

First Stage - Effect of Discharge Duration (t_e): as reported by Masuzawa (2001), the discharge energy W_e ≈ u_e · i_e · t_e [J] induced in the working gap is the main EDM factor responsible for the process performance, i.e., removal rate, electrode wear and surface integrity. Thus, at the first stage of this work, the value of the duty factor τ is fixed at 0.5 and the machining characteristics are optimized against the variation of the discharge duration t_e. Rough and finish machining regimes are analyzed for discharge currents i_e of 32 A and 6 A, under an open circuit voltage û_i of 80 V and 120 V, respectively. The range of discharge duration t_e varies from 3.2 to 50 µs for finish machining; for EDM under rough machining, t_e goes up to 400 µs.

Second Stage - Effect of Duty Factor (τ): here the optimum discharge duration t_e that promoted the best machining characteristics is kept constant and the values of the pulse interval time t_o are modified. This varies the duty factor τ in order to further improve the machining performance. The range of the interval time t_o was specified as 100, 50, 25 and 12.8 µs for finish machining and as 200, 100, 50 and 25 µs for rough machining.

Third Stage - Effect of Open Circuit Voltage (û_i): the variable û_i considerably affects the working gap size. Consequently, the open circuit voltage û_i has to be properly set to guarantee a proper dispersion of the sparks along the frontal area of the electrode/workpiece pair and to provide good flushing conditions. At this last stage, using the best discharge duration t_e and the most appropriate duty factor τ obtained in stage two, the open circuit voltage û_i is scanned from 80 to 200 V to verify its influence over the EDM machining performance under rough and finish regimes.

Materials and Equipment

(i) Workpiece: square samples of tungsten carbide-cobalt, 20 mm wide and 10 mm deep, with R_a = 0.8 µm on the surface to be machined, were prepared by wire EDM. The chemical composition of the WC-Co composite material is as follows: 88.2% of WC, 11.5% of Co+Ni and 0.3% of impurities. The WC average grain size is 2.0 µm, considered a fine grain size. This alloy has a density of 14.30 g/cm³, hardness of 1240 HV10, melting point of 2597 °C and compressive strength of 420 kgf/mm². Figure 2 shows a scanning electron microscope (SEM) image of WC grains and the Co substrate of the WC-Co workpiece used in this work.

(iii) Machine tool: a Charmilles ROBOFORM 30 CNC die-sinking machine tool, equipped with an isoenergetic generator that allows setting the value of the discharge duration t_e, was used throughout the experiments. An important parameter is the ignition delay time t_d, which elapses between applying the open circuit voltage û_i across the gap and the establishment of the discharge current î_e. When finish EDM machining is carried out, longer times t_d are applied; in this work, t_d is set to 30% of the discharge duration t_e for finish machining. For rough EDM machining operations, lower times t_d are used because the working gap is normally large; here t_d is set to 15% of the discharge duration t_e. These values of t_d were established based on pilot test results.

(iv) Flushing method: a hydrocarbon dielectric fluid with 3 cSt at 40 °C, flash point of 134 °C and 0.01 wt.% of aromatic contents was used for the tests. In this work, shallow cavities of small diameter were planned to be machined. For that reason, a jet of dielectric fluid directed against the gap and the immersion of the electrode/workpiece pair into the dielectric were applied as the flushing technique. This method was sufficient to evacuate the excess of eroded particles away from the working gap as well as to promote adequate cooling. In order to further improve the flushing efficiency, an alternation between periods of machining U [s] and periods of electrode retraction with no discharges R [s] was introduced, as shown in Fig. 4. The values of U and R were defined after pilot tests.

Results and Discussion

The objective of this study is to provide guidelines to optimize the EDM of tungsten carbide-cobalt using copper-tungsten electrodes under rough and finish machining. In order to achieve this target, the experiments were carried out in three stages. The first stage deals with the variation of the discharge duration t_e, the second uses the best results of the first stage to analyze the influence of the duty factor τ, and the last stage concerns the influence of the open circuit voltage û_i.

First Stage - Effect of Discharge Duration t_e

The discharge energy W_e ≈ u_e · i_e · t_e [J] induced in the working gap is the main EDM factor responsible for the process performance, i.e., removal rate, electrode wear and surface integrity. Thus, the discharge currents i_e = 6 and 32 A were chosen to analyze the EDM behavior under finish and rough machining conditions over the variation of the discharge duration t_e. The initial value of duty factor τ = 0.5 was established because it promotes good EDM process stability. The results of the material removal rate V_w against the variation of the discharge duration t_e for the negative copper-tungsten electrode are summarized in Fig. 5.
The global values of V_w for the discharge current î_e = 32 A are much higher than those achieved for î_e = 6 A. This occurs because the material removal rate V_w depends on the energy W_e [J] released into the working gap, i.e., the increase of the discharge current i_e leads to higher values of V_w. Here, the spalling phenomenon, which consists of the separation of small volumes of the WC ceramic phase from the base material, is also responsible for this behavior. The spalling effect is more prominent with the increase of the discharge current, in that case causing an easier separation of small volumes of WC material and promoting higher values of V_w. This spalling effect has also been observed in the study of Lauwers et al. (2005) on electrical discharge machining of Si3N4-based ceramic material with the addition of conductive phases.

Additionally, it can be noticed that, as the discharge duration t_e increases, regardless of the value of the discharge current i_e, the rate V_w also increases up to a maximum value at a specific optimum t_e. The highest material removal rate V_w is approximately 4.2 mm³/min for î_e = 32 A at the optimum t_e = 200 µs. After this point V_w starts to decrease. This arises from longer discharge durations t_e, which diminish the pressure and energy of the channel of plasma over the molten material of the electrode and the workpiece. As a consequence, process instability in the form of short-circuits and arc-discharges takes place, lowering the material removal rate V_w. Figure 5 also shows that for the discharge current î_e = 6 A the variation of the discharge duration t_e from 3.2 to 50 µs did not affect the material removal rate V_w significantly. This is related to the small working gap size, which hinders the total molten material from being properly expelled from the gap at the end of the discharge. Consequently, the molten and vaporized material solidifies in the recently formed crater and its surroundings. The best value of V_w = 0.5 mm³/min for the discharge current i_e = 6 A is achieved at t_e = 12.8 µs.

The volumetric relative wear ϑ represents the ratio of the electrode wear rate V_e [mm³/min] to the workpiece material removal rate V_w [mm³/min]. The results of ϑ [%] as a function of the discharge duration t_e for currents i_e = 6 and 32 A are shown in Fig. 6. For the discharge current i_e = 6 A, increasing the discharge duration t_e decreases ϑ, reaching a minimum of about 20% at the optimum t_e = 12.8 µs. It is also seen that the variation of the discharge duration t_e did not significantly affect the values of ϑ for rough machining with i_e = 32 A. For this current, the volumetric relative wear ϑ stays around 18% up to the optimum t_e = 200 µs. Independently of the discharge duration t_e, the enlargement of the discharge current (i_e = 6 to 32 A) promoted lower volumetric relative wear ϑ when machining with the CuW electrode. This phenomenon comes from the CuW electrode chemical composition (30% Cu and 70% W). The elevated concentration of tungsten, with its high melting point (3410 °C), gives the electrode higher resistance against thermal wear degradation during machining. The result is a lower electrode wear rate V_e and a better material removal rate V_w, which causes a decrease of the volumetric relative wear ϑ = V_e/V_w when the discharge current i_e increases.

Figure 7 shows the results of the surface roughness R_a versus the discharge duration t_e. The lowest R_a = 1.1 µm is reached for the discharge current i_e = 6 A and t_e = 3.2 µs. For i_e = 6 A, the variation of the discharge duration t_e from 3.2 to 50 µs did not affect the average surface roughness R_a considerably. This has to do with the small working gap, which does not promote an adequate evacuation of the eroded particles, but instead accumulates them in the crater and its surroundings. When machining with i_e = 32 A, an increase of the surface roughness R_a is detected as the discharge duration t_e is raised. This is due to the higher values of the material removal rate V_w, which produce deeper and larger craters on the surface of the workpiece.
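The two stage-one performance measures are straightforward to compute once the removed volumes are known. The paper does not describe its measurement procedure; the sketch below assumes the common weigh-and-convert approach, using the quoted WC-Co density of 14.30 g/cm³, an assumed ~14.2 g/cm³ for the CuW(70/30) electrode, and illustrative mass losses chosen to land near the reported rough-machining optimum (V_w ≈ 4.2 mm³/min, ϑ ≈ 18%).

```python
def removal_rate(mass_loss_g, density_g_cm3, time_min):
    """Volume removed per minute [mm^3/min] from mass loss and density."""
    return mass_loss_g / density_g_cm3 * 1000.0 / time_min   # cm^3 -> mm^3

t_min   = 30.0    # machining time, min (illustrative)
rho_wc  = 14.30   # g/cm^3, WC-11.5Co grade used here (from the text)
rho_cuw = 14.2    # g/cm^3, assumed for the CuW(70/30) electrode

V_w = removal_rate(1.80, rho_wc, t_min)    # workpiece mass loss 1.80 g (made up)
V_e = removal_rate(0.32, rho_cuw, t_min)   # electrode mass loss 0.32 g (made up)
theta = 100.0 * V_e / V_w                  # volumetric relative wear [%]
print(f"V_w = {V_w:.1f} mm^3/min, V_e = {V_e:.2f} mm^3/min, theta = {theta:.0f}%")
```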
Second Stage - Effect of Duty Factor τ

From the optimized values of the discharge duration t_e obtained in stage one, the duty factor τ was varied to analyze its influence on the EDM performance. The duty factor τ of 0.5 was the starting point, as shown in Fig. 8. The optimum discharge duration t_e is kept fixed and the interval time t_o is modified. From the best conditions for finish machining (t_e = 12.8 µs, i_e = 6 A), the duty factor τ is reduced from 0.5 to 0.11 by increasing the interval time t_o within the range of 12.8, 25, 50 and 100 µs. It is observed from Fig. 8 that this variation of the duty factor τ does not significantly affect the values of the material removal rate V_w.

For the discharge current i_e = 32 A at the optimum discharge duration t_e = 200 µs, the duty factor τ is raised from 0.5 up to 0.89 by lowering the interval time t_o in the sequence 200, 100, 50 and 25 µs. A small increase of the material removal rate, to V_w ≈ 4.5 mm³/min, is noticed for a duty factor τ of 0.67. Higher values of the duty factor (τ = 0.8 and 0.89) reduce the material removal rate. This is caused by the low interval times t_o, which lead to an over-concentration of debris in the working gap and then bring instability into the working gap, either in the form of arc-discharge pulses or short-circuit pulses. Figure 9 shows that for both rough and finish machining (i_e = 32 and 6 A) the variation of the duty factor τ significantly influences the values of the volumetric relative wear ϑ = V_e/V_w. For the discharge current i_e = 32 A, the increase of the duty factor τ from 0.5 to 0.89 raises the volumetric relative wear ϑ up to about 22%. This is due to the low interval times t_o, which promote a high concentration of EDM byproducts in the working gap, reducing the material removal rate V_w. For finish machining, the decrease of the duty factor τ from 0.5 to 0.11 reduces the volumetric relative wear ϑ = V_e/V_w. Here this occurs because longer interval times t_o improve the flushing conditions by reducing the occurrence of arc-discharges and short-circuits, promoting more machining stability. From Fig. 10 it is clearly seen that the surface roughness for rough machining is not extensively affected by the variation of the duty factor τ, remaining at about R_a = 3.5 µm. This has to do with the fact that the duty factor was varied by modifying the interval time t_o, which does not influence the energy W_e ≈ u_e · i_e · t_e [J] supplied to the machining process. For finish machining, the reduction of the duty factor from 0.5 to 0.11 caused an insignificant decrease of the surface roughness R_a, from 1.5 to approximately 1.2 µm.

Third Stage - Effect of Open Circuit Voltage û_i

Figure 11 shows the influence of the variation of the open circuit voltage û_i on the results of the material removal rate V_w for the EDM machining of the tungsten carbide-cobalt composite material. For rough machining with i_e = 32 A, duty factor τ = 0.67 and optimum t_e = 200 µs, the variation of the open circuit voltage (û_i = 80 to 200 V) provides a small increase of V_w, to 5.2 mm³/min. This is due to the intrinsic relation of the open circuit voltage û_i with the size of the working gap, i.e., the distance between the electrode and the workpiece during the occurrence of the electric discharge. For the rough EDM conditions (i_e = 32 A), higher values of û_i support the occurrence of larger working gaps. This fact enhances the flushing of the eroded particles away from the working gap, causing an improvement of the material removal rate V_w.

From Fig. 11 it is observed that, for finish machining with i_e = 6 A under the optimum electrical parameters, the variation of the open circuit voltage û_i does not affect the results of the material removal rate V_w. This happens because the variation of û_i from 80 to 200 V does not widen the working gap enough for the flushing conditions to be improved and provide better values of the material removal rate V_w.

Figure 12 presents the results of the volumetric relative wear ϑ for the variation of the open circuit voltage û_i. In EDM, the very small byproducts generated by the burning of the dielectric tend to adhere to the surface of the electrode, promoting the formation of a protective layer against wear. The concentration of these byproducts in the working gap depends on its size, i.e., the larger the working gap, the more easily the byproducts are removed by the flushing. For rough machining with i_e = 32 A, the increase of û_i provided a growth of the working gap, causing better flushing conditions, which then lowered the concentration of the byproducts. This prevented the formation of the protective layer on the surface of the electrode, causing an increase of the electrode wear rate V_e. As a consequence, the volumetric relative wear ϑ = V_e/V_w increases when the open circuit voltage varies from 80 to 200 V. For finish machining (i_e = 6 A), the variation of û_i did not affect the values of the volumetric relative wear.

Figure 13 shows that the elevation of the open circuit voltage û_i for rough machining with i_e = 32 A increased the surface roughness R_a considerably, from about 3.2 to 5.5 µm. This takes place because the variation of û_i raised the material removal rate V_w, promoting deeper and larger craters on the surface of the tungsten carbide-cobalt workpiece. For finish machining (i_e = 6 A), it is observed that the level of the surface roughness R_a is not influenced by the different values of the open circuit voltage û_i.

Conclusion

In EDM, some major tasks concern achieving a high material removal rate, small electrode wear and low surface roughness. Thus, in this work, a sequence of experiments was performed to provide useful guidelines to optimize the die-sinking EDM of tungsten carbide-cobalt (WC-Co) using a copper-tungsten (CuW) electrode under rough and finish regimes. Important EDM parameters were investigated with reference to the workpiece material removal rate V_w, the volumetric relative wear ϑ and the average surface roughness R_a. From the experimental investigations the following conclusions can be drawn:

(i) The increase of the discharge duration t_e promotes a higher material removal rate V_w and produces a poorer surface texture R_a for rough machining regimes, but does not affect the values of V_w and R_a considerably for finish machining. The volumetric relative wear ϑ reduces with the increase of t_e for finish machining, but is not affected for the rough machining regime.

(ii) The variation of the duty factor τ slightly improves the material removal rate V_w for both rough and finish machining regimes. The surface texture R_a is not affected significantly by the variation of the duty factor τ. The volumetric relative wear ϑ for rough and finish regimes is significantly influenced by the variation of the values of the duty factor.

(iii) The open circuit voltage û_i increases the material removal rate V_w and the surface texture R_a for the rough machining regime. For finish machining, the values of V_w and R_a do not change with the variation of the open circuit voltage û_i. The volumetric relative wear ϑ for rough machining gets higher with the rise of the open circuit voltage û_i, but its values for the finish regime are not affected.
Nomenclature (fragment):
voltage, V
V_e = electrode wear rate, mm³/min
V_w = material removal rate, mm³/min
W_e = discharge energy, J

Figure 1. (A) Schematic representation of the phases of an electric discharge in EDM and the definition of the duty factor τ; (B) the concept of the EDM phenomenon.
Figure 2. SEM image of the surface of the tungsten carbide-cobalt workpiece.
Figure 3. Assembly of the electrode and WC-Co workpiece at the EDM machine tool.
Figure 4. Series of pulses U [s] followed by a pause time R [s].
Figure 5. Results of the material removal rate V_w against the variation of the discharge duration t_e.
Figure 6. Volumetric relative wear ϑ against the variation of the discharge duration t_e.
Figure 7. Average surface roughness R_a against the variation of the discharge duration t_e.
Figure 8. Results of the material removal rate V_w against the variation of the duty factor τ.
Figure 9. Volumetric relative wear ϑ against the variation of the duty factor τ.
Figure 10. Results of the surface roughness R_a against the variation of the duty factor τ.
Figure 11. Material removal rate V_w against the variation of the open circuit voltage û_i.
Figure 12. Volumetric relative wear ϑ against the variation of the open circuit voltage û_i.
Figure 13. Surface roughness R_a against the variation of the open circuit voltage û_i.
Table 1. Stages and electrical and non-electrical parameter values for the optimization tests.
2018-12-14T14:51:02.418Z
2010-12-01T00:00:00.000
{ "year": 2010, "sha1": "bae59bfb819f31103fb2407749b35c724c986b99", "oa_license": "CCBYNC", "oa_url": "http://www.scielo.br/pdf/jbsmse/v32nspe/a09v32nspe.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "bae59bfb819f31103fb2407749b35c724c986b99", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
261494251
pes2o/s2orc
v3-fos-license
Master Integrals for Four-Loop Massless Form Factors We present analytical results for all master integrals for massless three-point functions, with one off-shell leg, at four loops. Our solutions were obtained using differential equations and direct integration techniques. We review the methods and provide additional details. Introduction Form factors are important quantities in Quantum Chromodynamics, N = 4 super Yang-Mills theory and other theories.In the simplest cases, a single operator is inserted in a matrix element between two massless states, and all propagating particles are massless.Such form factors can be constructed from vertex diagrams with two legs on the light cone, p 2 1 = p 2 2 = 0, such that the corresponding Feynman integrals depend on one mass scale, q 2 = (p 1 + p 2 ) 2 .In this paper, we consider such integrals in dimensional regularization, where d = 4 − 2ϵ is the number of space-time dimensions used to regularize ultraviolet, soft and collinear divergences. Two-loop corrections to form factors were computed more than thirty years ago [1][2][3][4].The first three-loop result was presented in Ref. [5] and later confirmed in Ref. [6].Analytic results for the three-loop form factor integrals were presented in Ref. [7].In Ref. [8], the results of Ref. [7] were used to compute form factors at three loops up to order ϵ 2 , i.e., transcendental weight eight, as a preparation for future four-loop calculations.These integrals and form factors have been confirmed in [9]. Indeed, four-loop calculations have taken place since then.First analytical results for the four-loop form factors were obtained for the quark form factor in QCD in the large-N c limit, where only planar diagrams contribute [10], and for the fermionic contributions [11].All the planar master integrals for the massless four-loop form factors were evaluated in [12].The n 2 f results were obtained in [13], and the complete contribution from color structure (d abcd F ) 2 was evaluated in [14] and confirmed in [15].For the quark and gluon form factors, all corrections with three or two closed fermion loops were calculated in [16,17], respectively, including also the singlet contributions.The fermionic corrections to quark and gluon form factors in four-loop QCD were evaluated in [18].The four-loop N = 4 SYM Sudakov form factor was analyzed in [19] and analytically evaluated in [20].The complete analytical evaluation of the quark and gluon form factors in four-loop QCD was presented in [21].The four-loop corrections to the Higgs-bottom vertex within massless QCD were evaluated in [22]. In these calculations, two methods of evaluating master integrals were applied by our two competing groups: the method of differential equations and the evaluation by integrating over Feynman parameters: the first one was applied in [10,11,13,14] and the second one in [12,[15][16][17].Then our two groups combined their forces and applied these two methods when collaborating [18,[20][21][22].A crucial building block for these form factor calculations were the solutions for the four-loop master integrals, which is the topic of this paper. 
In general, four-loop form factors with one off-shell and two massless legs can involve integrals belonging to 100 reducible and irreducible top-level topologies with 12 lines, as shown in figure 1, or sub-topologies thereof. In this work, we present analytical solutions for the ϵ expansion of all master integrals in these topologies. The results are given in terms of zeta values and multiple zeta values (MZV), and are complete at least up to and including weight 8, as required for N^4LO calculations.

The remainder of this paper is organized as follows. In section 2, we describe how we applied the method of differential equations. In a subsection, we describe peculiarities of using integration by parts (IBP) to perform the reduction to master integrals. In section 3, we describe how we applied the method of analytical integration over Feynman parameters. In a subsection, we discuss a dedicated reduction scheme for integrals with many dots. In section 4, we comment on the explicit solutions for the master integrals that we provide in the ancillary files. In section 5, we compare the two basic methods that we used.

2 Evaluation with differential equations

2.1 Two-leg off-shell integrals, reduction to ϵ-form

The four-loop form-factor Feynman integrals that we evaluated have the following form:
$$G_{\nu_1,\ldots,\nu_{18}}(q^2) = \int \frac{\prod_{a=1}^{4} \mathrm{d}^d k_a}{D_1^{\nu_1} \cdots D_{18}^{\nu_{18}}}\,, \qquad (2.1)$$
where the D_i are propagators and/or numerators raised to some integer powers ν_i (indices). For the calculations presented in this section, we choose the last six indices for numerators, while the first twelve indices can be positive, i.e. they can correspond to propagators. For example, for one of the most complicated diagrams for four-loop form factors, a corresponding choice of propagators and numerators can be made, where p_1 and p_2 denote the outgoing momenta of the two massless legs.

According to the strategy of IBP reduction, which was discovered more than forty years ago [23,24], the evaluation of integrals of a given family can be reduced to the evaluation of the corresponding master integrals. In the next subsection, we describe how we did this in the case of four-loop form-factor integrals.

Let us now turn to the method of differential equations [25-28]. Since p_1^2 = p_2^2 = 0, the integrals of our family depend only on one variable, q^2 = (p_1 + p_2)^2, which we often set to (−1) in intermediate expressions, since this dependence is easily recovered from dimensional analysis (namely G_{ν_1,...,ν_18} ∝ (q^2)^{2d − Σ_i ν_i}). In order to make use of the differential equations method, we follow the approach of Ref.
[29]. We consider the family of the same topology as in figure 2, now assuming that p_2^2 = x q^2, and derive the differential system in the variable x. In what follows, we will use the terms two-scale and one-scale master integrals to refer to the master integrals of the family with generic x and with x = 0, respectively. At the point x = 1 we have p_2^2 = q^2, and we can assume not only that p_1^2 = 0 but also that p_1 = 0, as can be clearly seen from, e.g., the Feynman parametric representation. The corresponding propagator-type master integrals were evaluated in an ϵ-expansion more than ten years ago [30] and are known even up to weight twelve [31]. The idea is that the differential equations allow us to transfer data from the simple point x = 1 to the desired point x = 0. The more involved IBP reduction of the family with p_2^2 ≠ 0 appears to be a fair price for the advantages of the differential equations method. The system of differential equations for the vector of master integrals j has the usual form
$$\frac{\partial}{\partial x}\, j(\epsilon, x) = M(\epsilon, x)\, j(\epsilon, x)\,,$$
where M(ϵ, x) is a matrix, rational in x and ϵ.

For the family in figure 2, the size of the system (the number of two-scale master integrals) is as large as 374, but even larger systems appear in other families. While not immediately obvious, a far more important characteristic of the complexity is the position of the singular points in x. Since our final results for the one-scale master integrals involve only non-alternating MZV sums, one might speculate that the only singular points of the emerging differential systems are x = 0, 1, ∞. And indeed, this is the case for many families that we considered. However, a few systems also contained singularities at other points. In particular, the system for the family in figure 2 contained singularities for
$$x \in \{-1,\, 0,\, 1/4,\, 1,\, 4,\, \infty\}\,. \qquad (2.4)$$
Systems for other families contained only some of these singularities. Note that the singularity at x = 1/4 is especially troublesome, as it lies on the segment [0, 1], exactly on the integration path of the evolution operator connecting the point of interest, x = 0, and the point x = 1, where the boundary conditions are fixed. The general solution of the differential system does have a branch point at x = 1/4. From physical and technical arguments, this point cannot be a branch point of the specific solution on the first sheet (but it is a branch point on other sheets). This requirement provides yet another check of the correctness of our procedure. We may check the absence of a branch point by comparing the results obtained by shifting the integration contour slightly up and down from the real axis, which corresponds to the changes x → x + i0 and x → x − i0, respectively. As the coefficients of the differential system are all real, these two prescriptions are related by complex conjugation. Therefore, the absence of a branch point at x = 1/4 can be established by checking that either of these two prescriptions leads to real-valued results. For definiteness, we will assume that the integration contour is shifted up.

In order to reduce the system to ϵ-form we use the algorithm of Refs. [28,32] (see also section 8 in Ref. [33]) as implemented in Libra [34]. We had to introduce algebraic letters x_1, x_2 and x_3 involving square roots of rational functions of x. In this way, we reduce the system to the canonical ϵ-form
$$\frac{\partial}{\partial x}\, J(\epsilon, x) = \epsilon \Big(\sum_k S_k\, \frac{\mathrm{d} \ln w_k(x)}{\mathrm{d} x}\Big)\, J(\epsilon, x)\,,$$
where the w_k are the letters of the alphabet and the S_k are some constant matrices. Note that there is no variable simultaneously rationalizing x_1, x_2 and x_3, as they correspond to more than 3 square-root branching points: 0, ∞, 4, 1/4; see Ref.
[32]. However, it appears that the weights depending on x_1, x_2 and x_3 never appear together in one iterated integral. More precisely, the iterated integrals which appear in our results fall into one (or a few) of four families, distinguished by the letters they contain; the first family consists of those containing only letters in the alphabet {w_1, w_2, w_3}. The integrals of the first two families are readily expressed via Goncharov's polylogarithms with indices 0, ±1 (for the second family we have to pass to x_1). For the integrals of the third family, we pass to the variable y_3 = √3/(2x_3). When x varies from 0 to 1, y_3 also varies from 0 to 1, and we obtain the result for the integrals of the third family in terms of Goncharov's polylogarithms with indices 0, ±1, ±i√3.

The last family is the most complicated. We introduce the variable y_2 = 1/2 + i x_2. Taking into account our prescription x → x + i0, we establish that y_2 follows the path C depicted in figure 3 when x varies from 0 to 1. We replace this path by the equivalent path C′ depicted in the same figure. In this way we obtain the result for the iterated integrals of this family in terms of Goncharov's polylogarithms with indices 0, 1, e^{±iπ/3}, 1/2 and argument e^{iπ/3}. We can normalize the argument to 1 by using the homogeneity property of polylogarithms (as usual, we must exercise a certain care when dealing with polylogarithms with trailing zeros). Finally, the integrals of the fourth family are expressed via Goncharov's polylogarithms with indices 0, 1, e^{−iπ/3}, e^{−2iπ/3}, (1/2)e^{−iπ/3} and unit argument. To summarize, we have the following correspondence:

• Families 1, 2: integrals are expressed via G(a_1, . . ., a_w | 1) with a_k ∈ {0, ±1} (alternating MZVs);
• Family 3: integrals are expressed via G(a_1, . . ., a_w | 1) with a_k ∈ {0, ±1, ±i√3};
• Family 4: integrals are expressed via G(a_1, . . ., a_w | 1) with a_k ∈ {0, 1, e^{−iπ/3}, e^{−2iπ/3}, (1/2)e^{−iπ/3}}.

Note that the polylogarithms for the fourth family are not manifestly real-valued, so, as we explained above, the check of "real-valuedness" of the integrals from the fourth family provides a non-trivial check of our setup. After we obtained the results for the coefficients of the ϵ-expansion of all one-scale master integrals in terms of the above-mentioned polylogarithms, we used the PSLQ [35] algorithm to recognize the results in terms of simple, non-alternating MZVs.

2.2 IBP reduction of two-scale integrals

There are many public and private codes to perform IBP reduction. In this work, we applied the public code FIRE [36,37] and the private code Finred by A. von Manteuffel. The IBP reduction of one-scale four-loop form-factor integrals is rather complicated, and the IBP reduction of the two-scale integrals needed for the method of differential equations is even more complicated. An important point is to reveal a minimal set of master integrals. It is also important to find a basis such that the only denominators in IBP reductions are either of the form ad + b, where d is the space-time dimension and a and b are rational numbers, or simple polynomials depending only on kinematic invariants and/or masses. Otherwise, we refer to factors in denominators as bad. To get rid of bad denominators, i.e. to turn to a basis in which no bad denominators appear, one can apply the public code described in [38] (see also Ref. [39]).

The presence of bad denominators can essentially complicate the IBP reduction, and it can happen that it is not possible to get rid of them. Two examples of such a situation were found in Ref.
[40] in the context of five-loop massless propagators. It turned out that there was a hidden relation involving four master integrals with eleven positive indices from four partially overlapping sectors.

For four-loop massless form factors, a similar situation takes place at the level of nine positive indices. This hidden relation is relevant for two one-scale families, one of which is the family corresponding to the graph of figure 2. The relation, Eq. (2.10), connects master integrals with nine positive indices; in it, all terms from lower sectors (of levels less than nine) are omitted and dots in the indices mean that the last six indices are zero. The complete relation can be found in a file attached to this paper. We have derived this relation by running FIRE with two different options (with the no presolve option and without it; this option turns off the partial solving of IBPs before index substitutions and thus leads to a reduction in another direction), so it is clear that this relation is a consequence of IBP relations. This relation has previously been depicted diagrammatically in [15], where it had been derived with Finred, using integration-by-parts identities generated from seed integrals in a common parent topology.

Using the same strategy, we have derived a hidden relation also for the two-scale integrals of this family. A file with this relation is also attached to the paper. It has the same form as Eq. (2.10) at level nine, but the contribution from lower levels is different and depends on x. In fact, this relation should transform into the corresponding relation in the one-scale case in the limit x → 0, but seeing this explicitly is more complicated than the derivation described above. In each of the two cases, the additional relation is used to reduce the number of master integrals. In the new basis, all the bad denominators successfully disappear.

Let us mention, for completeness, that relations within a current set of master integrals can be revealed with the help of various symmetries. This procedure is usually included in codes that solve IBP relations. An explicit example, together with a discussion of various ways of looking for extra relations between master integrals, can be found in [41] in the context of the master integrals needed for the computation of the lepton anomalous magnetic moment at three loops.

For the IBP reduction of one-scale integrals, both our groups applied modular arithmetic (for early discussions of such techniques see, e.g., [42,43]). One of our groups used the private code Finred by A. von Manteuffel, which was the first code to solve IBP relations with the help of modular arithmetic, and the other group used FIRE. For the IBP reduction of two-scale integrals appearing within the method of differential equations, we applied FIRE, also with modular arithmetic. We first performed rational reconstruction to transition from modular arithmetic to rational numbers. Then, after fixing d or x, we ran Thiele reconstruction [44] to obtain a rational function of the other variable. Since we have a good basis, the denominators factor into a function of d and a function of x. Hence the worst possible denominator of the coefficients of the master integrals, i.e. the least common multiple of all occurring denominators, is obtained by multiplying the univariate factors. Knowing the worst denominator, we could multiply the functions being reconstructed by it and perform an iterative Newton-Newton reconstruction [44], i.e. apply two Newton reconstructions with respect to the two variables.
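The univariate Thiele step in the reconstruction chain just described is easy to prototype. The sketch below builds a Thiele continued fraction from exact rational samples of a univariate rational function and evaluates it at a new point; this is a minimal illustration over Python's `Fraction` type, whereas production IBP codes such as FIRE and Finred perform these steps over finite fields and combine them with Newton interpolation in the second variable. The target function is an arbitrary example, not an actual reduction coefficient.

```python
# Minimal sketch of Thiele (continued-fraction) rational interpolation.
from fractions import Fraction as F

def thiele_coeffs(xs, ys):
    """Inverse differences a[k] of the Thiele continued fraction
    f(x) = a[0] + (x-xs[0])/(a[1] + (x-xs[1])/(a[2] + ...)).
    Assumes generic sample points and that the number of samples matches
    the degrees (too many samples make deep inverse differences degenerate)."""
    n = len(xs)
    phi = [ys[:]]                # phi[k][j] is the value for point i = k + j
    a = [ys[0]]
    for k in range(1, n):
        prev = phi[k - 1]        # prev[j] corresponds to i = (k-1) + j
        row = [(xs[i] - xs[k - 1]) / (prev[i - k + 1] - prev[0])
               for i in range(k, n)]
        phi.append(row)
        a.append(row[0])
    return a

def thiele_eval(xs, a, x):
    """Evaluate the continued fraction from the bottom up."""
    val = a[-1]
    for k in range(len(a) - 2, -1, -1):
        val = a[k] + (x - xs[k]) / val
    return val

# Reconstruct r(x) = (3x^2 + 1)/(x - 2) from 4 exact samples (deg 2 / deg 1).
r = lambda x: (3 * x**2 + 1) / (x - 2)
xs = [F(t) for t in (3, 4, 5, 6)]
a = thiele_coeffs(xs, [r(x) for x in xs])

assert thiele_eval(xs, a, F(10)) == r(F(10))   # exact match at a new point
print("reconstructed value at x=10:", thiele_eval(xs, a, F(10)))
```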
3 Evaluation by integration over Feynman parameters Finite integrals, analytical integration Perhaps the most straightforward way to solve a Feynman integral is the direct integration of its Feynman parametric representation.What we wish to obtain is the Laurent expansion of the integral in the regulator ϵ.We find it convenient to work with integrals which are finite for ϵ → 0, such that we can expand the integrand in ϵ and then perform the integrations for the Laurent coefficients.It has been shown in [45,46] that one can always express an arbitrary (divergent) Feynman integral as a linear combination of a basis of "quasi-finite" integrals, which have convergent Feynman parameter integrations for ϵ → 0. Requiring also the Γ prefactor involving the superficial degree of divergence to be finite, one can also choose completely finite integrals for ϵ → 0 [9].In this construction, the finite integrals may live in higher dimensions and may have "dots", i.e. higher powers of propagators (see [47,48] for generalizations of quasi-finite integrals).A systematic list of such finite integrals can be obtained easily with the program Reduze 2 [49].Expressing a divergent Feynman integral in terms of a basis of finite integrals, all poles in ϵ become explicit in the coefficients of this rewriting.The explicit linear relations needed to express an integral in terms of finite basis integrals are obtained from dimension-shift identities and integration-by-parts reductions, which will be discussed in more detail below. The integrands of the finite integrals can easily be expanded in ϵ.In general, the integrations of the coefficients can lead to complicated special mathematical functions and may be difficult to perform.A given Feynman parametric representation for some Feynman integral can have the property of "linear reducibility".For linearly reducible integrals, there exists an order of integrations such that each integration can be performed in terms of multiple polylogarithms in an algorithmic way.Thanks to the algorithms of [50][51][52] and their implementation in HyperInt [53], a suitable order of integration can be determined with a polynomial reduction algorithm.If a representation is not linearly reducible, it is sometimes still possible to perform a rational transformation of the Feynman parameters such that the resulting new parametrization is linearly reducible.For integrals resulting in elliptical polylogarithms or more complicated structures, no linearly reducible representations exists.Currently, no algorithm is known to determine unambiguously, whether a linearly reducible parameterization exists for a given Feynman integral. In our case, we have been able to find linearly reducible parametric representations for almost all topologies, with the only exception being two trivalent (top-level) topologies depicted as the last two entries in figure 1 of [15].For these two topologies, the method of differential equations allows us to obtain the solutions from ϵ-factorized differential equations and integrations in terms of multiple polylogarithms, as explained in section 2. 
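Returning to the expand-then-integrate strategy for finite integrals described at the start of this section, a one-dimensional toy makes the workflow concrete. The integrand below is finite at ϵ = 0, so one may expand under the integral sign and integrate the Laurent coefficients order by order; the exact answer, a Beta function, provides the cross-check. This stand-in is, of course, vastly simpler than the multi-parameter integrands treated with HyperInt.

```python
# Toy illustration: I(eps) = int_0^1 (x(1-x))^(-eps) dx = B(1-eps, 1-eps).
# Expand the (finite) integrand in eps and integrate each coefficient,
# then compare with the Taylor expansion of the exact Beta-function result.
from math import factorial
from mpmath import mp, quad, log, gamma, taylor

mp.dps = 30

def coeff(order):
    """Laurent coefficient: int_0^1 (-log(x(1-x)))^order / order! dx."""
    return quad(lambda x: (-log(x * (1 - x)))**order / factorial(order),
                [0, 0.5, 1])

series_numeric = [coeff(n) for n in range(3)]   # 1, 2, 4 - pi^2/6, ...
series_exact = taylor(lambda e: gamma(1 - e)**2 / gamma(2 - 2 * e), 0, 2)

for n, (num, ex) in enumerate(zip(series_numeric, series_exact)):
    print(f"eps^{n}: numeric = {num}  exact = {ex}  diff = {abs(num - ex)}")
```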
We can not exclude that also for these topologies a linearly reducible representation could be found.We emphasize that, in practice, direct integrations allowed us to derive complete analytical solutions through to transcendental weight 7 for all Feynman integrals, even in those topologies which are not linearly reducible.For the latter, this was achieved by a suitable choice of basis integrals and a high-precision numerical evaluation plus constant fitting for a single remaining integral [54].The key observation was that certain Feynman integrals involve only the F polynomial but not the U polynomial at leading order in ϵ, and the F polynomial alone could be rendered linearly reducible in all cases. We performed the parametric integrations for the finite integrals with the Maple program HyperInt.While straightforward in principle, the integration generates a large number of terms at intermediate stages.Performance challenges arise due to bookkeeping tasks and greatest common divisor computations to combine coefficients of the same multiple polylogarithm.In order to obtain complete information at a given transcendental weight, depending on the choice of basis integrals, one needs a different number of terms in the epsilon expansion, and each such term may require significantly different amounts of computational resources.Usually, the first term in the ϵ expansion is relatively inexpensive to compute, and with increasing order, the computational complexity increases a lot.For this reason, we usually start by trying out an overcomplete list of candidate integrals and compute the leading term(s) of their ϵ expansion.We then select basis integrals, whose ϵ expansion starts with high weight and which performed well in terms of run-times for the computation of the leading term(s) in ϵ.By inserting the basis change into form factor expressions, we check for unwanted weight drops due to our choice of basis integrals.For our basis choice, more difficult topologies start to contribute only at relatively high weight. To perform the integrations at higher weight, in some cases, we used a parallelized HyperInt setup on compute nodes with hundreds of GB of main memory and weeks of runtime.In this way, we were able to analytically calculate all Feynman integrals to weight 6, all but one to weight 7 (with the last one guessed from numerical data), and a large number of integrals to weight 8.In all cases, we checked our results with precise numerical evaluations using the program FIESTA [55,56].The numerical evaluations were performed for our finite integrals, which allowed for better computational performance than generic integrals. IBP reduction of dotted integrals Our finite integrals typically have dots and are defined in d = d 0 −2ϵ dimensions, where the reference dimension d 0 in many cases is larger than four, d 0 = 6, 8, . ... In order to express them in terms of integrals with d 0 = 4 dimensions, we exploit dimension-shift identities as described e.g. in [57], which also introduces dotted integrals.In particular, we employ dimension-increasing shifts which introduce four additional dots for a four-loop integral. We establish the relation between the basis of finite integrals and a conventional basis through integration-by-parts identities, where the particular challenge lies in the reduction of the dotted integrals. The reduction scheme is based on integration-by-parts relations in the Lee-Pomeransky representation [58][59][60].Defining the (twisted) Mellin transform of a function f (x 1 , . . 
., x_N) as
$$M\{f\}(\nu) = \int_0^\infty \Big(\prod_{i=1}^{N} \frac{x_i^{\nu_i - 1}\, \mathrm{d}x_i}{\Gamma(\nu_i)}\Big)\, f(x_1, \ldots, x_N)\,, \qquad (3.1)$$
the integral (2.1) is normalized such that it is proportional to M{G^{−d/2}}(ν) with G = U + F. Here, U and F are the first and second Symanzik polynomials, respectively, ν = Σ_i ν_i, N = 18, L = 4, and the normalization constant 𝒩 is not relevant in the following.

The Mellin transform (3.1) has the properties
$$M\{x_i f\}(\nu) = \nu_i\, M\{f\}(\nu + e_i)\,, \qquad M\{\partial_i f\}(\nu) = -M\{f\}(\nu - e_i)\,,$$
with multi-index notation such that ν = (ν_1, . . ., ν_N), e_i = (0, . . ., 0, 1, 0, . . ., 0) has a nonzero entry at position i, and ∂_i ≡ ∂/∂x_i. From this it is easy to see how insertions of x_i and ∂_i translate into shifts of propagator powers. We use the shift operators î± acting on functions of the indices via (î± g)(ν) = g(ν ± e_i). Every differential operator P annihilating G^{−d/2} then generates from M{P G^{−d/2}} = 0, via the substitutions x_i → ν_i î⁺ and ∂_i → −î⁻, a shift relation. In fact, every shift relation is generated in this way [60].

To construct annihilators, we make ansätze of the form
$$P = c_0 + \sum_i c_i\, \partial_i + \sum_{i \le j} c_{ij}\, \partial_i \partial_j\,.$$
In the following, we will restrict ourselves to at most second order derivatives in P. The functions c_0(x_1, . . ., x_N), c_i(x_1, . . ., x_N) and c_ij(x_1, . . ., x_N) are polynomials in the Feynman parameters x_i and are determined such that the annihilation condition (3.8), P G^{−d/2} = 0, is fulfilled, which requires a system of polynomial equations in the x_i to hold identically. These equation "templates" are then applied to "seed integrals" with non-negative integer insertions for the ν_i, followed by a standard "Laporta"-style reduction of these identities for the specific loop integrals. For the latter we use the modular arithmetic and rational reconstruction methods available in Finred.

We compute the syzygies sector by sector, aiming at the reduction of integrals without irreducible numerators. While we found that annihilators of linear order in the derivatives are insufficient in some cases, annihilators of second order (involving also the c_ij) allowed us to generate the desired reductions. Instead of computing complete syzygy modules, we restrict ourselves in the construction of the c_0, c_i and c_ij to a maximal degree in the Feynman parameters and employ linear algebra methods (see also [48,61]) implemented in Finred for their computation.

The fact that we may ignore numerators for the sector for which we construct the annihilator deserves a comment. Linear annihilators produce at most a single decrementing shift operator in each term, such that no numerators will be produced for seed integrals without numerators. This is no longer the case in the presence of a second order (î⁻)² contribution, which can indeed lead to a subsector integral with a numerator. Interestingly, keeping also the subsector identities for a given sector, all of these auxiliary integrals can be eliminated without additional effort.

In practice, we note that for subsectors with fewer lines, integrals with rather large numbers of dots need to be reduced. On the other hand, the identities produced by the annihilator method can be reduced rather quickly. For the present calculation, we chose to reconstruct full reduction identities with full d dependence and rational numbers as coefficients, which required a large number of samples in some cases and thus non-negligible computational effort. We found this approach attractive with regard to workflow considerations, since it decoupled our experiments with different types of basis changes from the computation of the reductions. By storing intermediate reductions of integrals at the level of finite field samples, and by reconstructing symbolic expressions only after assembling the desired linear combinations of integrals (e.g., for a specific basis change from finite integrals to conventional ones), one can work with a considerably smaller number of samples and decrease the computational effort.
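The two shift properties of the twisted Mellin transform are simple to verify numerically in one dimension. The sketch below checks them for an arbitrary smooth, decaying test function; it is purely illustrative and independent of any particular Feynman integral.

```python
# Numerical check of the twisted-Mellin shift properties in one dimension:
#   M{x f}(nu) = nu * M{f}(nu + 1)   and   M{f'}(nu) = -M{f}(nu - 1).
from mpmath import mp, quad, gamma, exp, diff, mpf, inf

mp.dps = 25

f = lambda x: exp(-x - x**2 / 3)          # arbitrary decaying test function
df = lambda x: diff(f, x)                 # numerical derivative

def mellin(g, nu):
    """Twisted Mellin transform: int_0^inf x^(nu-1) g(x) dx / Gamma(nu)."""
    return quad(lambda x: x**(nu - 1) * g(x), [0, inf]) / gamma(nu)

nu = mpf("2.5")
print("x-insertion:", mellin(lambda x: x * f(x), nu), nu * mellin(f, nu + 1))
print("derivative: ", mellin(df, nu), -mellin(f, nu - 1))
# Both pairs agree to working precision (nu > 1 so the boundary term vanishes).
```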
4 Results in electronic files

We provide analytical results for the complete set of all massless, four-loop, three-point master integrals with one off-shell leg at https://www.ttp.kit.edu/preprints/2023/ttp23-034/. Please see the file README for details regarding the employed conventions and for a description of the various files.

Our analytical results for the vertex integrals with one off-shell leg are given as Laurent expansions in ϵ and allow quantities to be computed at least up to and including weight 8, in many cases up to and including weight 9. Results obtained by the method of differential equations are given in a UT basis (strictly speaking, the UT property is a conjecture at higher orders in ϵ). The complete set of all master integrals is given in terms of finite integrals, which allows for easier numerical checks. In addition, we provide mappings to a more conventional "Laporta basis", determined by a generic ordering of integrals.

We also provide results for the vertex integrals with two off-shell legs, which we used to employ the method of differential equations. In particular, we define basis integrals which lead to ϵ-factorized differential equations. Moreover, we also provide the differential equations themselves.

For the calculation of some (physical) quantity to weight 8 using some non-UT basis, it may seem that one needs information from higher-order terms in the ϵ expansion that are not provided here. To work around this problem, we introduced tags in the expansions representing specific unknown higher-weight contributions. By expanding to sufficiently high order in ϵ and making sure that these tags drop out in the final result, one can still use such an alternative basis.

5 Conclusion

We presented solutions for all four-loop master integrals contributing to massless vertex functions with one off-shell leg. Our results for the Laurent expansion in the dimensional regulator ϵ are given in terms of regular and multiple zeta values and are complete up to and including at least transcendental weight eight. We provide concise definitions of all master integrals, their analytical solutions, basis transformations and further auxiliary expressions in electronic files that can be downloaded from https://www.ttp.kit.edu/preprints/2023/ttp23-034/.

We employed two methods to obtain these results: one based on differential equations for topologies with an additional off-shell leg, the other based on direct parametric integrations of finite integrals. In a large number of cases, we employed both methods to compute integrals in the same topology. Moreover, almost all integrals were checked up to weight seven by such redundant calculations. For the weight six and weight seven contributions we also had non-trivial checks available from the cusp and collinear anomalous dimensions extracted from the poles of different form factors. Various weight eight contributions have been obtained using only one of the two described methods; those we checked against precise numerical evaluations with FIESTA.
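Checks of this kind, as well as the original recognition of the one-scale constants in terms of MZVs (section 2), rest on integer-relation fitting. A minimal PSLQ prototype with mpmath is shown below; the target constant is constructed by hand purely to demonstrate the workflow, and {ζ(6), ζ(3)²} is indeed a basis for weight-6 non-alternating MZVs.

```python
# Integer-relation fitting with PSLQ: recover the rational coefficients of a
# weight-6 combination of zeta values from its numerical value alone.
from mpmath import mp, pslq, zeta, mpf

mp.dps = 60

# Pretend this came out of a high-precision evaluation of some integral:
target = mpf(7) / 2 * zeta(6) - mpf(13) / 8 * zeta(3)**2

# Weight-6 basis for non-alternating MZVs: zeta(6) and zeta(3)^2.
rel = pslq([target, zeta(6), zeta(3)**2], maxcoeff=10**6)
print(rel)   # e.g. [16, -56, 26]: 16*target - 56*zeta(6) + 26*zeta(3)^2 = 0
```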
We observed that it can be computationally rather inexpensive to compute lower-weight contributions in the parametric integration approach, once a linearly reducible representation is available. Furthermore, a suitable basis choice can avoid contributions from more complicated topologies at lower weight. For that reason, the weight seven contributions could essentially be obtained from direct integrations. At weight eight, the situation is different. For two particularly complicated top-level topologies we could not even find a suitable starting point, that is, a linearly reducible representation. Moreover, for a number of other topologies, we were not successful with direct integrations due to the high computational resource demands.

Remarkably, in all these challenging cases, the differential equation approach worked well and allowed us to obtain analytical solutions. Despite the fact that one generalizes the problem by taking one of the light-like legs off-shell, the power of the method outweighed this potential drawback in practice. As a bonus, the method allowed us to arrive at uniformly transcendental basis integrals, and one can obtain results also at even higher transcendental weight if needed.

Figure 1. Reducible and irreducible top-level topologies for four-loop form factor integrals with one off-shell leg.
Figure 2. One of the most complicated, non-planar diagrams for four-loop form factors. This topology has four master integrals in the top-level sector.
Figure 3. Integration paths C and C′ on the complex plane of y_2.
2023-09-04T06:42:29.469Z
2023-08-31T00:00:00.000
{ "year": 2023, "sha1": "4e4250f30b50b72e3ce2bdee74035ed9a6d30cb1", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-023-12179-2.pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "4e4250f30b50b72e3ce2bdee74035ed9a6d30cb1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
7627237
pes2o/s2orc
v3-fos-license
Biweekly oxaliplatin, raltitrexed, 5-fluorouracil and folinic acid combination chemotherapy during preoperative radiation therapy for locally advanced rectal cancer: a phase I–II study Oxaliplatin (OXA), raltitrexed (RTX), 5-fluorouracil (FU) and folinic acid (FA) have shown activity in metastatic colorectal cancer, radioenhancing effect and synergism when combined. We evaluated a chemotherapy (CT) combination of OXA, RTX and FU/FA during preoperative radiotherapy (RT) in locally advanced rectal cancer (LARC) patients. Fifty-one patients with LARC at high risk of recurrence (T4, N+ or T3N0 ⩽5 cm from anal verge and/or circumferential resection margin ⩽5 mm) received three biweekly courses of CT during pelvic RT (45 Gy). Surgery was planned 8 weeks after CT-RT. Recommended doses (RDs) determined during phase I were utilised in the subsequent phase II trial, where the rate of tumour regression grade (TRG) 1 or 2 was the main end point. No toxic deaths occurred, and severe toxicity was easily managed. In phase II, RDs delivered in 31 patients were OXA 100 mg m−2 and RTX 2.5 mg m−2 on day 1, and FU 900 mg m−2 and LFA 250 mg m−2 on day 2. Main severe toxicities by patients were grade 4 neutropenia (23%) and grade 3 diarrhoea (19%). In 71% (95% confidence limits, 52–86%) of patients, TRG1 (13) or TRG2 (9) was obtained. All patients are alive and recurrence-free after a median follow-up of 29 months. Combination of OXA, RTX and FU/FA with pelvic RT has an acceptable toxicity and a high clinical activity in LARC and should be studied further in patients at high risk of recurrence. Total mesorectal excision (TME) has markedly improved the local control in patients with locally advanced rectal cancer (LARC) (MacFarlane et al, 1993). Moreover, the Dutch trial demonstrated that the addition of preoperative radiation therapy (RT) to TME reduced the rate of local recurrence. Nevertheless, the overall survival (OS) was not improved because RT failed to prevent distant metastases (Kapiteijn et al, 2001). Furthermore, preoperative delivery of 5-fluouracil (FU) during RT has been proven to further reduce the local recurrence and toxicity compared to postoperative approach, but it did not increase recurrence-free and OS (Sauer et al, 2004). Raltitrexed (RTX), a direct and specific TS inhibitor (Jackman et al, 1991), has shown activity in advanced colorectal cancer (Cunningham, 1998). Moreover, RTX has demonstrated radiosensitising properties in preclinical studies (Teicher et al, 1998) as well as activity when combined with preoperative or postoperative RT in LARC (Botwood et al, 2000;Gambacorta et al, 2004b). Interestingly, in vitro studies have shown a synergistic activity when RTX is followed 24 h later by FU (Longo et al, 1998;Caponigro et al, 2001), and a positive pharmacokinetic interaction between RTX and FU has been demonstrated (Schwartz et al, 2004). When folinic acid (FA) is added to FU in the combination, an even greater synergism has been observed (Longo et al, 1998). Moreover, preclinical studies have shown that the administration of FA 24 h after RTX may reduce its bone marrow and gastrointestinal toxicity (Farrugia et al, 2000), which have been the main life-threatening toxicities of RTX in a large multicentre randomised trial (Maughan et al, 2002). 
As a matter of fact, we have treated 53 patients with metastatic colorectal cancer with a biweekly administration of RTX on day 1, followed by FAmodulated FU on day 2, reporting no toxic deaths; severe neutropenia and diarrhoea were absolutely manageable, and a 24% response rate was obtained (Comella et al, 2000). Oxaliplatin (OXA) is a platinum derivative that has shown radiosensitising properties (Cividalli et al, 2002), additivity with RTX and synergism with FU (Raymond et al, 2002). Clinical studies have shown high response rates for the combination of OXA with either FU/FA or RTX in MCRC (Cascinu et al, 2002;Seitz et al, 2002;Goldberg et al, 2004;Comella et al, 2005). Furthermore, the addition of OXA to a biweekly regimen of RTX and FU/FA proved manageable and active in heavily pretreated patients (Comella et al, 2002). Recently, several investigators have reported encouraging results with the addition of OXA to fluoropyrimidines during pelvic preoperative radiotherapy in LARC (Gerard et al, 2003;Rodel et al, 2003;Gambacorta et al, 2004a;Aschele et al, 2005;Sebag-Montefiore et al, 2005). For all these reasons, the combination of OXA, RTX and FU/FA with pelvic RT represents an attractive perspective in the preoperative setting of LARC. In fact, besides the radio-enhancing effect, this regimen takes advantage of the synergism demonstrated between the three agents. Therefore, we performed a phase I -II trial on the biweekly combination of OXA, RTX and FU/FA during RT in patients with LARC. Eligibility criteria Eligible patients had a histologically proven previously untreated adenocarcinoma of the extra-peritoneal rectum. In the phase I study, we enrolled patients in clinical stage II or III, or requiring an abdomino-perineal resection (APR). In the phase II study, we accrued only patients at high risk of recurrence: T4, node positive or T3N0 of the lower third of the rectum and/or with circumferential resection margin (CRM) p5 mm by magnetic resonance imaging (MRI) (Beets-Tan et al, 2000). Additional inclusion criteria were Eastern Cooperative Oncology Group performance status p2, age X18 years and adequate baseline bone marrow and organ function. Main exclusion criteria were previous malignant tumour, severe heart disease, uncontrolled infection or metabolic disorders, severe neurologic or inflammatory bowel disease. This study was approved by the Independent Ethical Committee of the National Tumour Institute of Naples and a written informed consent was obtained from all patients before enrolment. Work-up Pretreatment work-up included a complete history and physical examination, a complete blood cell count (BCC) with white blood cell differential, a serum chemistry profile, carcinoembryonic antigen assay (CEA), an electrocardiogram, colorectal endoscopy with a biopsy of the tumour, transrectal ultrasonography (EUS), chest X-ray, pelvic and abdominal computed tomography (CT) and liver and pelvic MRI. Blood cell counts were also obtained 8 and 11 days after each cycle of chemotherapy, whereas serum chemistry was repeated weekly. Radiotherapy Conformal RT was delivered by a three-field box technique, consisting of a posterior -anterior and two lateral fields, using high-energy X photons (6 -20 MV) by a Varian CD 2100 linear accelerator, to a total dose of 45 Gy over 5 weeks (1.8 Gy  5 fractions/week) to the reference point according to International Commission on Radiation Units (ICRU) 50 -62. Radiation therapy was interrupted only when a severe toxicity occurred. 
All patients were treated in prone position, and a bellyboard was utilised to minimise the amount of small bowel in the treatment field. Clinical target volume encompassed the tumour, defined by MRI imaging, plus the total mesorectum, the common, internal iliac and obturatorial lymph nodes. Three-dimensional plan was performed with a dedicated treatment planning system after on-line CT virtual simulation and CT-MRI image fusion. We contoured the small bowel, the femoral heads and the bladder as critical organs on all CT slices of every patient (according to ICRU 50 and 62), and we evaluated the relative dose -volume histogram on the treatment planning console. A quality control was assured by a weekly portal imaging verification of all fields, and adherence to the protocol was checked by portal imaging verification and matching all the fields with digitally reconstructed radiographs. Chemotherapy Chemotherapy consisted of three biweekly cycles delivered immediately before RT on days 1 and 2 of the first, third and fifth week of RT. In the first four dose levels, patients received only RTX as short intravenous (i.v.) infusion on day 1 and levo-FA (2-h i.v. infusion) followed by FU i.v. bolus on day 2, 24 h after RTX ( Figure 1). In the next dose levels, OXA (2-h i.v. infusion) was delivered before RTX on day 1. In case of persistent grade X2 toxicity, according to the National Cancer Institute common toxicity criteria (NCI CTC-Version 2) at the time of recycling, chemotherapy was delayed for 1 -2 weeks. Otherwise, chemotherapy was permanently discontinued. A 25% FU dose reduction was applied in subsequent cycles in case of grade 4 neutropenia or anaemia, grade X3 febrile neutropenia or thrombocytopenia, grade X2 cardiac or renal toxicity or grade X3 other nonhaematologic toxicities (except for alopecia). At the second appearance of these side effects or after the first appearance of grade X3 sensory neuropathy, a 25% OXA dose reduction was also planned. Doses of RTX and levo-FA were never reduced. Surgery Patients underwent TME 8 weeks after the completion of chemoradiation. An anterior resection (AR) or an APR was performed on the basis of restaging. Intraoperative frozen sections were obtained to assess the resection margins in the case of conservative treatment. Anastomoses were protected by a loop ileostomy, and ileostomy reversal was performed after endoscopic assessment of anastomotic integrity. Pathology Pathologic examination provided a macroscopic description of the mesorectum and of the former tumour-bearing area. For residual tumour, at least four paraffin blocks were processed, and an additional large area block was embedded. If no tumour was visible, the entire suspicious area was sliced and embedded. Circumferential resection margin was examined by sampling a 1 mm thick slice, and lymph nodes were searched by manual dissection (Andreola et al, 2001a, b). All resection specimens were examined by the same experienced pathologist, according to a standardised protocol based on tumour node metastasis categories, reporting the number of examined and involved lymph nodes (including the apical node one), the CRM evaluation and the tumour regression grading (TRG). The pathologic records were reported on a standard form for all patients. 
Pathologic response was classified as follows: TRG1, complete response with absence of residual cancer cells and fibrosis extending through the wall (regardless of the presence of acellular mucin lakes); TRG2, presence of few residual cancer cells scattered through the fibrosis; TRG3, clear evidence of residual cancer cells, but with predominant fibrosis; TRG4, residual cancer cells outgrowing fibrosis; TRG5, absence of regressive changes (Mandard et al, 1994).

Adjuvant chemotherapy

Four months of adjuvant chemotherapy with weekly FU 370 mg m−2 and levo-FA 20 mg m−2 (QUASAR Collaborative Group, 2000) was planned only for patients with cT4 lesions, or with pN+ and/or pCRM ≤1 mm.

Follow-up

Patients were clinically assessed every 3 months for the first 2 years, every 6 months for the next 2 years and annually thereafter, with pelvic MRI, chest and abdominal CT, CEA serum level and proctoscopy.

Study design

Phase I: Based on pharmacokinetic interactions, escalation of RTX and FU was planned only up to 3 and 900 mg m−2, respectively (Schwartz et al, 2004). Subsequently, OXA was added and escalated (Figure 1). Consecutive cohorts of three patients were treated at each dose level. If a dose-limiting toxicity (DLT) was observed, this cohort was expanded to six patients. If ≤2 patients out of six experienced a DLT at a given dose level, escalation proceeded. The maximum tolerated dose (MTD) was defined as the dose level at which DLTs occurred in more than one-third of patients. The recommended dose (RD) for the subsequent phase II study was defined as the dose level preceding the MTD. Dose-limiting toxicities were defined as grade 4 neutropenia or anaemia; grade ≥3 febrile neutropenia or thrombocytopenia; grade ≥2 renal or cardiac toxicity; grade ≥3 other non-haematologic toxicity (except for alopecia); a delay of more than 2 weeks in chemotherapy recycling; and any other severe adverse events.

Phase II: We chose as primary end point the achievement of a complete or nearly complete pathologic response, because it has been repeatedly reported to predict the long-term survival of patients (Bouzourene et al, 2002; Rödel et al, 2005; Vecchio et al, 2005). To define the sample size, Simon's two-stage design was utilised (Simon, 1989), setting the α and β errors to 0.05 and 0.20, and defining as minimum activity of interest (p0) a TRG1-2 rate of 30%. To accept the alternative hypothesis (p1) of a TRG1-2 rate ≥50%, at least six TRG1 or 2 in the first 15 patients, and at least 19 TRG1 or 2 in a total of 46 patients, had to be reported. Secondary end points were safety, progression-free survival and OS.

Patient characteristics

Between December 2000 and August 2004, 20 patients were enrolled in the phase I study and 31 patients were enrolled in the phase II study (Table 1). In the phase II study, 27 patients were at moderately high or high risk of recurrence according to Gunderson et al (2004). The tumour distance from the mesorectal fascia was ≤5 mm in 19 patients, and from the anal verge was ≤5 cm in 15 patients.

Dose escalation and DLTs

In the first four dose levels, no DLT occurred. With the addition of OXA, one of three patients showed grade 4 neutropenia after the second cycle of chemotherapy. Therefore, three more patients were treated at this dose level. Of these last patients, a 65-year-old woman suffered from grade 3 diarrhoea, which eventually recovered, allowing for treatment completion.
At the sixth dose level, two of two patients experienced a DLT: a 77-year-old man had a delay of more than 2 weeks after the first cycle, caused by grade 3 neutropenia, and a 58-year-old man presented grade 3 colitis after the third cycle. Consequently, the previous dose level was considered the RD (Table 2).

Toxicity of the phase II study

Acute toxicities are listed in Table 3. Stomatitis, diarrhoea and neutropenia were the only grade 3 or 4 toxicities. Grade 3 stomatitis occurred in one (3%) patient, whereas six (19%) patients experienced grade 3 diarrhoea. Neutropenia was of grade 3 in five (16%) patients and of grade 4 in seven (23%) patients (febrile in two patients). Only one patient required hospitalisation, for diarrhoea and concomitant grade 4 febrile neutropenia, and he did not complete RT (cumulative dose, 41.5 Gy). In all other cases, toxicities were easily managed and resolved in 2-4 days. Grade 1 peripheral neuropathy occurred in five (16%) patients. Eight of 93 cycles of chemotherapy were omitted, for toxicity (n = 6) or refusal (n = 2), with an RT and chemotherapy compliance of 97 and 91%, respectively. A 25% FU dose reduction was needed in 11 patients. (In two patients treated in the phase I study and three patients in the phase II study, MRI was not performed, because of metal prostheses in four cases and metal stitches in the fifth.)

Activity

Phase I: All 20 patients underwent R0 resection, but liver metastases were diagnosed intraoperatively in two patients. A TRG1 was obtained in six (30%) patients and a TRG2 in two (10%) patients.

Phase II: All patients had a TME with complete mesorectum, and the median number of retrieved nodes was 29 (range 10-80). Twenty-nine (93%) patients had an R0 resection, because the CRM was ≤1 mm in two patients. Anterior resection was performed in 25 patients, and a sphincter-preserving procedure could be performed in nine of 15 (60%) patients with a tumour located ≤5 cm from the anal verge. In two patients, APR was mandatory because of baseline sphincter infiltration. Twenty-five patients (81%) obtained a T downstaging (four of six cT4, 19 of 23 cT3 and two of two cT2). Nodal downstaging was detected in 23 of 28 (82%) patients. Positive lymph nodes were detected in only five (16%) patients, one of whom had a single micrometastatic focus. No patient showed an increase of T or N stage. Pathologic evaluation showed a TRG1 in 13 (42%) patients and a TRG2 in nine (29%) patients. Therefore, a TRG1 or 2 was reported in 71% (95% confidence limits, 52-86%) of patients, with no significant correlation with baseline clinical characteristics (Table 4).

Follow-up

As of November 2005, after a median follow-up of 53 (range 41-59) months, four patients recruited in the phase I study
In particular, the lack of reduction of distant metastases in this trial strongly supports the exploitation of more active cytotoxic regimens. Moreover, the risk of overestimating the tumour's degree of penetration with EUS points to the need for a more accurate staging of patients. Furthermore, retrospective data have suggested that a combined modality therapy could represent an overtreatment in patients at low risk of local failure after TME alone, such as some tumours staged as T3N0 (Willett et al, 1999). In this setting, MRI plays a key role, because it can predict the CRM involvement (Beets-Tan et al, 2000), so defining the patients at higher risk of local recurrence when treated with TME plus RT (Marijnen et al, 2003), although it has been demonstrated less sensitive and specific at identifying nodal disease and vascular invasion (Brown et al, 2003;Branagan et al, 2004). Our phase I -II study was carried out in patients with LARC at high risk of recurrence, as identified by both EUS and MRI. The aim of the phase I study was to determine the MTDs of the combination of RTX, FU/FA and OXA combined with pelvic RT. Raltitrexed and FU were escalated only up to 3 and 900 mg m À2 , respectively, because a pharmacokinetic analysis has demonstrated that pre-administration of RTX (X2.5 mg m À2 ) 24 h before FU (900 mg m À2 ) significantly increased its C max and AUC (Schwartz et al, 2004). A grade 3 neutropenia was the only severe toxicity that occurred in the first four dose levels. The safety of the administration of RTX plus FU during RT could likely be explained by the concurrent delivery of FA. Indeed, preclinical studies have shown that the administration of FA 24 h after RTX not only reduced its toxicity (Farrugia et al, 2000) but also increased the synergism between RTX and FU (Longo et al, 1998). When OXA was added to the combination, the RDs for the phase II study were OXA 100 mg m À2 , RTX 2.5 mg m À2 , FU 900 mg m À2 and LFA 250 mg m À2 , administered every 2 weeks. Notably, these dosages are similar or even greater than those usually utilised in the treatment of metastatic colorectal cancer patients without RT (Cascinu et al, 2002;Seitz et al, 2002;Goldberg et al, 2004;Comella et al, 2005). Neutropenia and diarrhoea were the most frequent grade 3 or 4 adverse events occurring in the phase II study. However, we would underline that in our experience, the occurrence of grade X3 neutropenia was similar to that reported in the MOSAIC trial with FOLFOX4 regimen (André et al, 2004), and severe diarrhoea was not higher than that seen in the CAO/AIO/ARO-94 trial with preoperative RT plus FU (Sauer et al, 2004). Moreover, in all but one case, toxicity resolved in 2 -4 days without hospitalisation. Neurotoxicity was negligible and scored as grade 1 only. The favourable toxicity profile of this combined treatment was reflected by its high compliance and by the acceptable surgical morbidity, with only one patient requiring re-operation. In the phase II study, we have analysed the pathologic response using the TRG score, because it has been shown to be more accurate in defining the tumour regression after primary therapy, and to predict the long-term outcome (Bouzourene et al, 2002;Rödel et al, 2005;Vecchio et al, 2005). Notably, the number of TRG1 or 2 required by our statistical design had already been reached in the first 31 treated patients, with an overall activity (71%) by far greater than that hypothesised. 
In addition, such activity has never been reported with preoperative combination chemotherapy and RT (Gerard et al, 2003; Rodel et al, 2003; Mehta et al, 2003; Gambacorta et al, 2004a; Aschele et al, 2005; Sebag-Montefiore et al, 2005). The 95% confidence limits of these results (52-86%) strongly support the conclusion that this approach could be effective in at least half of the treated patients. Moreover, it should be stressed that the complete or nearly complete pathologic response rate was obtained in high-risk patients (Phang et al, 2002; Gunderson et al, 2004). Interestingly, the achievement of TRG1-2 was independent of baseline characteristics. The high activity of this treatment was reflected by the rate of R0 resection (93%), as well as by the tumour and nodal downstaging (81 and 82%, respectively). Moreover, 84% of patients showed no nodal involvement, no T or N upstaging was observed, and sphincter preservation was obtained in 60% of patients with low-lying tumours.

One could argue whether a similar activity could be obtained without the inclusion of RTX in the combination. Indeed, several recent phase I-II trials have explored the activity of a combination of OXA and fluoropyrimidines (i.v. FU or oral capecitabine) during preoperative pelvic RT (Gerard et al, 2003; Rodel et al, 2003; Aschele et al, 2005; Sebag-Montefiore et al, 2005). Although all these trials have demonstrated the feasibility of such combinations, we would underline that the reported pCR rates (ranging between 7 and 28%) were clearly lower than that achieved in our study. Notably, in all these studies, the OXA dose intensity during radiotherapy ranged from 43 to 60 mg m−2 per week, which is similar to what we were able to deliver with our three-drug combination. On the other hand, a phase II study integrating three cycles of OXA and TOM (without FU/FA) with pelvic RT reported a pCR in 9 out of 30 (30%) LARC patients (Gambacorta et al, 2004a). However, it should be noted that only patients with a limited disease extent (not including T4) were treated in that study. On the contrary, our treatment yielded a 42% TRG1 rate in an unfavourable patient population.

Furthermore, we would point out the accurate pathologic evaluation we performed, which allows a critical assessment of the primary treatment and may facilitate interstudy comparison. Indeed, an accurate pathologic analysis allows one to assess the CRM involvement, the number of examined and involved lymph nodes and the quality of the surgical performance, providing valuable data that have an impact on outcome (Tepper et al, 2001; Nagtegaal et al, 2002; Mawdsley et al, 2005). It is important to stress that in our study the resection of the mesorectum was complete in all patients, the CRM was ≤1 mm in only two patients and a median number of 29 nodes was retrieved. Finally, it is remarkable that after a median follow-up of 29 months, all patients of the phase II study are alive and recurrence-free. Of note, only 11 of these patients received post-resection adjuvant FU/FA.

In conclusion, this study demonstrates that the combination of OXA, RTX and FU/FA with pelvic RT in LARC patients has an acceptable toxicity and an excellent treatment compliance. Moreover, it provides evidence of high clinical activity in patients selected for a high risk of recurrence. On these bases, we have decided to complete patient accrual up to the planned sample size, testing a slight dose reduction of FU (800 mg m−2) to further improve the safety of this combined treatment.
2014-10-01T00:00:00.000Z
2006-05-30T00:00:00.000
{ "year": 2006, "sha1": "fa24c5f2cb77ec2ab3409ec2d39e7447e85640e3", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/6603195.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "fa24c5f2cb77ec2ab3409ec2d39e7447e85640e3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119129066
pes2o/s2orc
v3-fos-license
The Finite Basis Problem for Kauffman Monoids We prove a sufficient condition under which a semigroup admits no finite identity basis. As an application, it is shown that the identities of the Kauffman monoid $\mathcal{K}_n$ are nonfinitely based for each $n\ge 3$. This result holds also for the case when $\mathcal{K}_n$ is considered as an involution semigroup under either of its natural involutions. The relations (1)-(3) are 'multiplicative', i.e., they do not involve addition. This observation suggests introducing a monoid whose monoid algebra over R could be identified with TL n (δ). A tiny obstacle is the presence of the scalar δ in (3), but it can be bypassed by adding a new generator c that imitates δ. This way one comes to the monoid K n with n generators c, h 1 , . . . , h n−1 subject to the relations (1), (2), and the relations which both mimic (3) and mean that c behaves like a scalar. The monoids K n are called the Kauffman monoids 1 after Kauffman [1990] who independently invented these monoids as geometric objects. It turns out that Kauffman monoids play a major role in several 'fashionable' parts of mathematics such as knot theory, low-dimensional topology, topological quantum field theory, quantum groups etc. As algebraic objects, these monoids belong to the family of so-called diagram or Brauer-type monoids that originally arose in representation theory and gained much attention recently among semigroup theorists. In particular, the first-named author (solo and with collaborators) has considered universal-algebraic aspects of some monoids from this family such as the finite basis problem for their identities or the identification of the pseudovarieties generated by certain series of such monoids [see, e.g., Auinger, 2014;Auinger, Dolinka and Volkov, 2012b]. In the present paper we follow this line of research and investigate the finite basis problem for the identities holding in Kauffman monoids. Whilst it is not clear whether or not a study of the identities of Kauffman monoids may be of any use for any of their non-algebraic applications, such a study constitutes an interesting challenge from the algebraic viewpoint since-in contrast to other types of diagram monoids-Kauffman monoids are infinite. We recall that there exist several powerful methods to attack the finite basis problem for finite semigroups (see the survey [Volkov, 2001] for an overview), but, to the best of our knowledge, so far the problem has been solved for only one natural family of concrete infinite semigroups that contains semigroups satisfying a nontrivial identity, namely, for non-cyclic one-relator semigroups and monoids [Shneerson, 1989]. Here we prove that, for each n ≥ 3, the identities of the monoid K n are not finitely based. The monoid K 2 is commutative, and thus, its identities are finitely based. The paper is structured as follows. In Section 1 we present geometric definitions for some classes of diagram monoids including Kauffman monoids and so-called Jones monoids. We also summarize properties of Kauffman and Jones monoids which are essential for the proofs of our main results. Section 2 contains a new sufficient condition under which a semigroup admits no finite identity basis. In Section 3 this condition is applied to the monoid K n with n ≥ 3, thus showing that the identities of K n are nonfinitely based; we also observe that the same result holds also for the case when K n is considered as an involution semigroup under either of its natural involutions. 
Besides that, we demonstrate a further application of our sufficient condition. The fact that the identities of K_n with n ≥ 4 are nonfinitely based was announced by the last-named author in his invited lecture at the 3rd Novi Sad Algebraic Conference held in August 2009. Slides of this lecture 2 included an outline of the proof for n ≥ 4 as well as an explicit mentioning that the case n = 3 was left open. This case has been recently analyzed by the first-named author and, independently and by completely different methods, by the three 'middle-named' authors of the present paper: it turns out that also the identities of K_3 are nonfinitely based. Naturally, the authors have decided to join their results into a single article, and so the present paper has been originated. The unified proof presented here is based on the approach by the first-named and the last-named authors. The alternative approach by the three 'middle-named' authors is of a syntactic flavor; it also has some further applications and will be published in a separate paper. Diagrams and their multiplication The primary aim of this section is to present a geometric definition for a series of diagram monoids which we call the wire monoids W_n, n ≥ 2. Each Kauffman monoid K_n can be identified with a natural submonoid of the corresponding wire monoid W_n so that a geometric definition for the Kauffman monoids appears as a special case. The reader should be advised that even though this geometric definition certainly clarifies the nature of Kauffman monoids and is crucial to their connections to various parts of mathematics, knowing it is not really necessary for understanding the proofs in the present paper. Therefore those readers who are mainly interested in the finite basis problem for K_n may skip the 'geometric part' of this section and rely on the definition of Kauffman monoids in terms of generators and relations as stated in the introduction and on a similar definition of Jones monoids given at the end of the section. We fix an integer n ≥ 2 and define the wire monoid W_n. Let [n] := {1, 2, . . . , n} and [n]′ := {1′, 2′, . . . , n′} be two disjoint copies of the set of the first n positive integers. The base set of W_n is the set of all pairs (π; d) where π is a partition of the 2n-element set [n] ∪ [n]′ into 2-element blocks and d is a non-negative integer referred to as the number of circles. Such a pair is represented by a wire diagram as shown in Figure 1. We draw a rectangular 'chip' with 2n 'pins' and represent the elements of [n] by pins on the left hand side of the chip (left pins) while the elements of [n]′ are represented by pins on the right hand side of the chip (right pins). Usually we omit the numbers 1, 2, . . . in our illustrations. Now, for (π; d) ∈ W_n, we represent the number d by d closed curves ('circles') drawn somewhere within the chip, and each block of the partition π is represented by a line referred to as a wire. Thus, each wire connects two pins; it is called an ℓ-wire if it connects two left pins, an r-wire if it connects two right pins, and a t-wire if it connects a left pin with a right pin. The wire diagram in Figure 1 corresponds to one such pair (π; d). Next we explain the multiplication in W_n. Pictorially, in order to multiply two chips, we 'shortcut' the right pins of the first chip with the corresponding left pins of the second chip.
Thus we obtain a new chip whose left (respectively, right) pins are the left (respectively, right) pins of the first (respectively, second) chip and whose wires are sequences of consecutive wires of the factors, see Figure 2. All circles of the factors are inherited by the product; in addition, some extra circles may arise from r-wires of the first chip combined with ℓ-wires of the second chip. In more precise terms, if ξ = (π_1; d_1), η = (π_2; d_2), then a left pin p and a right pin q′ of the product ξη are connected by a t-wire if and only if one of the following conditions holds: • p u′ is a t-wire in ξ and u q′ is a t-wire in η for some u ∈ [n]; • for some s > 1 and some u_1, v_1, . . . , u_{s−1}, v_{s−1}, u_s ∈ [n], the pair p u_1′ is a t-wire in ξ and u_s q′ is a t-wire in η, while all u_i v_i (1 ≤ i ≤ s − 1) are ℓ-wires in η and all v_i′ u_{i+1}′ (1 ≤ i ≤ s − 1) are r-wires in ξ. An analogous characterization holds for the ℓ-wires and r-wires of the product. Each extra circle of ξη corresponds to a sequence u_1, v_1, . . . , u_s, v_s ∈ [n] with s ≥ 1 and pairwise distinct u_1, v_1, . . . , u_s, v_s such that all u_i v_i are ℓ-wires in η, while all v_i′ u_{i+1}′ and v_s′ u_1′ are r-wires in ξ. It is easy to see that the above defined multiplication in W_n is associative and that the chip with 0 circles and the horizontal t-wires 1 1′, . . . , n n′ is the identity element with respect to the multiplication. Thus, W_n is a monoid; W_n also admits two natural unary operations. The first of them geometrically amounts to the reflection of each chip along its vertical symmetry axis. To formally introduce this reflection, consider the permutation * on [n] ∪ [n]′ that swaps primed with unprimed elements, that is, set j* := j′ and (j′)* := j for all j ∈ [n]. Then define (π; d)* := (π*; d), where π* := { {x*, y*} | {x, y} is a block of π }. It is easy to verify that ξ** = ξ, (ξη)* = η*ξ* for all ξ, η ∈ W_n, hence the operation ξ → ξ* is an involution of W_n. The second unary operation on W_n rotates each chip by the angle of 180 degrees. To define it formally, let α := ({ {i, (n+1−i)′} | i = 1, . . . , n }; 0) and define the unary operation ρ : W_n → W_n by ξ^ρ := αξ*α. Since α* = α and α² = 1, we get that ξ → ξ^ρ is also an involution on W_n. We refer to the involutions * and ρ as the reflection and respectively the rotation. Kauffman [1990] defined the connection monoid C_n as the submonoid of the wire monoid W_n consisting of all elements of W_n that have a representation as a chip whose wires do not cross. He has shown that C_n is generated by the hooks h_1, . . . , h_{n−1}, where h_i := ({ {i, i+1}, {i′, (i+1)′} } ∪ { {j, j′} | j ≠ i, i+1 }; 0), and the circle c := ({ {j, j′} | j = 1, . . . , n }; 1), see Figure 3 for an illustration. It is immediate to check that the generators h_1, . . . , h_{n−1}, c satisfy the relations (1), (2), and (4), whence there exists a homomorphism from the Kauffman monoid K_n onto the connection monoid C_n. In fact, this homomorphism turns out to be an isomorphism between K_n and C_n; a proof was outlined in [Kauffman, 1990] and presented in full detail in [Borisavljević, Došen and Petrić, 2002]. Observe that the set {h_1, . . . , h_{n−1}, c} is closed under both the reflection and the rotation in W_n: the reflection fixes each generator, while the rotation fixes c and maps h_i to h_{n−i} for each i = 1, . . . , n − 1. Therefore, the submonoid C_n generated by {h_1, . . . , h_{n−1}, c} is also closed under these involutions that, of course, transfer to the isomorphic monoid K_n, as well. The reader who prefers to have a 'picture-free' definition of the two involutions in Kauffman monoids may observe that the relations (1), (2), and (4) are left-right symmetric: each of these relations coincides with its mirror image.
Therefore, the map that fixes each generator of the monoid K_n uniquely extends to an involution of K_n; clearly, this extension is nothing but the reflection *, and this gives a purely syntactic definition of the latter. In a similar way, one can give a syntactic definition of the rotation ρ: it is the unique involutary extension of the map that fixes c and swaps h_i and h_{n−i} for each i = 1, . . . , n − 1. Since the involutions ξ → ξ* and ξ → ξ^ρ (especially the first one) are essential for many applications of Kauffman monoids, we find it appropriate to extend our study of the finite basis problem for the identities holding in K_n also to their identities as algebras of type (2,1), with the reflection or the rotation in the role of the unary operation. The corresponding question was stated in the last-named author's lecture mentioned in the introduction; here we will give a complete answer to it. Let us return for a moment to the wire monoid W_n. Denote by B_n the set of all 2n-pin chips without circles, in other words, the set of all partitions of [n] ∪ [n]′ into 2-element blocks. Observe that this set is finite. We define the multiplication of two chips in B_n as follows: we multiply the chips as elements of W_n and then reduce the product to a chip in B_n by removing all circles. This multiplication makes B_n a monoid known as the Brauer monoid: the monoids B_n were introduced by Brauer [1937] as vector space bases of certain associative algebras relevant in representation theory and thus became the historically first species of diagram monoids. We stress that even though the base set of B_n has been defined as a subset in the base set of W_n, it is not true that B_n forms a submonoid of W_n. On the other hand, it is easy to see that the 'forgetting' map ϕ : W_n → B_n defined by ϕ(π; d) = π is a surjective homomorphism (the homomorphism just forgets the circles of its argument). Clearly, both the reflection and the rotation respect B_n as a set and behave as anti-isomorphisms with respect to multiplication in B_n. Thus, B_n forms an involution monoid under each of these unary operations; moreover, the homomorphism ϕ is compatible with both involutions * and ρ. We summarize and augment the above information about the wire monoids and the Brauer monoids in the following lemma. Lemma 1. For each n ≥ 2, the map ϕ : (π; d) → π is a homomorphism from the monoid W_n onto the finite monoid B_n; the homomorphism respects both involutions * and ρ. For every idempotent in B_n, its inverse image under ϕ is a commutative subsemigroup in W_n. Proof. It remains to verify the last claim of the lemma. By the definition of ϕ, for each π ∈ B_n, its inverse image under ϕ coincides with the set Π := { (π; d) | d = 0, 1, 2, . . . }. If π² = π in the Brauer monoid, then the product (π; 0)(π; 0) in the wire monoid belongs to Π whence (π; 0)(π; 0) = (π; m) for some nonnegative integer m. Now if we multiply two arbitrary elements (π; k), (π; ℓ) ∈ Π, we get (π; k + ℓ + m) independently of the order of the factors. The Jones monoid J_n (the name was suggested by Lau and FitzGerald [2006] to honor the contribution of V.F.R. Jones to the theory [see, e.g., Jones, 1983, Section 4]) can be defined as the submonoid of the Brauer monoid B_n consisting of all elements of B_n that have a representation as a chip whose wires do not cross. Thus, J_n relates to B_n precisely as the Kauffman monoid K_n (in its incarnation as the connection monoid C_n) relates to the wire monoid W_n. Alternatively, one can define the Jones monoid as the image of the Kauffman monoid under the restriction of the 'forgetting' homomorphism ϕ to the latter.
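As an aside for the computationally minded reader, the chip arithmetic just described is easy to implement. The following Python sketch (an illustration written for this text with ad hoc encodings, not code from any of the cited works) encodes left pins as 0, . . . , n−1, right pins j′ as n+j, and a partition as a frozenset of 2-element frozensets; it traces the wires of a product and counts the extra circles, and the final assertion checks the instance h_i h_i = c h_i of relation (4).

def multiply(xi, eta, n):
    """Product of two chips (pi; d) in the wire monoid W_n.

    Right pins of xi are 'shortcut' with the corresponding left pins
    of eta; closed loops among the shortcut pins become extra circles.
    """
    (pi1, d1), (pi2, d2) = xi, eta
    adj = {}
    def edge(u, v):
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    for p, q in (tuple(b) for b in pi1):
        edge(('x', p), ('x', q))          # wires of the first chip
    for p, q in (tuple(b) for b in pi2):
        edge(('y', p), ('y', q))          # wires of the second chip
    for j in range(n):
        edge(('x', n + j), ('y', j))      # the shortcuts
    boundary = {('x', p) for p in range(n)} | {('y', p) for p in range(n, 2 * n)}
    seen, blocks, circles = set(), [], 0
    def step(prev, cur):                  # unique continuation of a path
        a = adj[cur]
        return a[0] if a[0] != prev else a[-1]
    for s in boundary:                    # trace each wire of the product
        if s in seen:
            continue
        seen.add(s)
        prev, cur = s, adj[s][0]
        while cur not in boundary:
            seen.add(cur)
            prev, cur = cur, step(prev, cur)
        seen.add(cur)
        blocks.append(frozenset({s[1], cur[1]}))
    for m in set(adj) - boundary:         # leftover cycles = extra circles
        if m in seen:
            continue
        circles += 1
        seen.add(m)
        prev, cur = m, adj[m][0]
        while cur != m:
            seen.add(cur)
            prev, cur = cur, step(prev, cur)
    return (frozenset(blocks), d1 + d2 + circles)

def hook(i, n):                           # the hook h_i, 1 <= i <= n-1
    blocks = {frozenset({i - 1, i}), frozenset({n + i - 1, n + i})}
    blocks |= {frozenset({j, n + j}) for j in range(n) if j not in (i - 1, i)}
    return (frozenset(blocks), 0)

h1 = hook(1, 3)
assert multiply(h1, h1, 3) == (h1[0], 1)  # h_i h_i = c h_i: one extra circle

Squaring a hook indeed returns the same partition together with exactly one extra circle, which is the geometric content of relation (4).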
Clearly, J_n is closed under * and ρ and forms an involution monoid with respect to each of these operations. The following scheme summarizes the relations between the four species of diagram monoids introduced so far:

K_n ──։ J_n
 |        |
 v        v
W_n ──։ B_n

The vertical arrows here stand for embeddings, the horizontal ones for surjections, and all maps respect multiplication and both involutions. The following fact is just a specialization of Lemma 1. Lemma 2. For each n ≥ 2, the map ϕ : (π; d) → π is a homomorphism from the monoid K_n onto the finite monoid J_n; the homomorphism respects both involutions * and ρ. For every idempotent in J_n, its inverse image under ϕ is a commutative subsemigroup in K_n. As promised at the beginning of this section, we conclude with showing how one may bypass geometric considerations and define the Jones monoid in terms of generators and relations. Since the monoid J_n is the image of K_n under ϕ, it is generated by the hooks h_1, . . . , h_{n−1}, and the following relations hold in J_n:

h_i h_j h_i = h_i if |i − j| = 1,   h_i h_j = h_j h_i if |i − j| ≥ 2,   h_i h_i = h_i. (5)

In fact, it can be verified [Borisavljević, Došen and Petrić, 2002] that the monoid generated by h_1, . . . , h_{n−1} subject to the relations (5), i.e., the monoid that spans the Temperley-Lieb algebra TL_n(δ) with δ = 1, is isomorphic to J_n. Thus, one can define J_n by this presentation. Lemma 2 can then be recovered as follows. The homomorphism ϕ : K_n ։ J_n arises in this setting as the unique homomorphic extension of the map that sends the generators h_1, . . . , h_{n−1} of K_n to the generators of J_n with the same names and 'erases' the generator c by sending it to 1; the fact that such an extension exists and enjoys all properties registered in Lemma 2 readily follows from the close similarity between the relations (1), (2), (4) on the one hand and the relations (5) on the other hand. The only claim in Lemma 2 which is not that apparent with this definition of J_n is the finiteness of the monoid. This indeed requires some work [see Borisavljević, Došen and Petrić, 2002, for details]. From the diagrammatic representation it can be easily calculated that the cardinality of J_n is the n-th Catalan number $\frac{1}{n+1}\binom{2n}{n}$. For further interesting results concerning the monoids K_n, J_n and similarly defined ones the reader may consult [Došen and Petrić, 2003]. A sufficient condition for the non-existence of a finite basis We assume the reader's familiarity with basic concepts of the theory of varieties [see, e.g., Burris and Sankappanavar, 1981, Chapter II] and of semigroup theory [see, e.g., Clifford and Preston, 1961, Chapter 1]. We aim to establish a condition for the nonfinite basis property that would apply to both 'plain' semigroups and semigroups with involution as algebras of type (2,1). The two cases have much in common, and we use square brackets to indicate adjustments to be made in the involution case. First, let us formally introduce involution semigroups. An algebra S = ⟨S, ·, ⋆⟩ of type (2,1) is called an involution semigroup if ⟨S, ·⟩ is a semigroup (referred to as the semigroup reduct of S) and the identities (xy)⋆ ≏ y⋆x⋆ and (x⋆)⋆ ≏ x hold, in other words, if the unary operation x → x⋆ is an involutory antiautomorphism of ⟨S, ·⟩. The free involution semigroup FI(X) on a given alphabet X can be constructed as follows. Let X̄ := {x⋆ | x ∈ X} be a disjoint copy of X. Define (x⋆)⋆ := x for all x⋆ ∈ X̄.
Then FI(X) is the free semigroup (X ∪ X̄)⁺ endowed with the involution defined by (x_1 x_2 · · · x_m)⋆ := x_m⋆ · · · x_2⋆ x_1⋆ for all x_1, . . . , x_m ∈ X ∪ X̄. We refer to elements of FI(X) as involutory words over X while elements of the free semigroup X⁺ will be referred to as plain words over X. If an involution semigroup T = ⟨T, ·, ⋆⟩ is generated by a set Y ⊆ T, then every element in T can be represented by an involutory word over Y and thus by a plain word over Y ∪ Ȳ where Ȳ = {y⋆ | y ∈ Y}. Hence the reduct ⟨T, ·⟩ is generated by the set Y ∪ Ȳ; in particular, T is finitely generated if and only if so is ⟨T, ·⟩. Recall that an algebra is said to be locally finite if each of its finitely generated subalgebras is finite. From the above remark, it follows that an involution semigroup S = ⟨S, ·, ⋆⟩ is locally finite if and only if so is ⟨S, ·⟩. We denote by L the class of all locally finite semigroups. A variety of [involution] semigroups is locally finite if all its members are locally finite. Given a class K of [involution] semigroups, we denote by var K the variety of [involution] semigroups it generates; if K = {S}, we write var S rather than var{S}. Let A and B be two subclasses of a fixed class C of algebras. The Mal'cev product A m B of A and B (within C) is the class of all algebras C ∈ C for which there exists a congruence θ such that the quotient algebra C/θ lies in B while all θ-classes that are subalgebras in C belong to A. Note that for a congruence θ on a semigroup S, a congruence class sθ forms a subsemigroup of S if and only if the element sθ is an idempotent of the quotient S/θ. Of essential use will be a powerful result by Brown [1968, 1971] that can be stated in terms of the Mal'cev product as follows. Proposition 3 (Brown [1968, 1971]). L m L = L where the Mal'cev product is considered within the class of all semigroups. Let x_1, x_2, . . . , x_n, . . . be a sequence of letters. The sequence {Z_n}_{n=1,2,...} of Zimin words is defined inductively by Z_1 := x_1, Z_{n+1} := Z_n x_{n+1} Z_n. We say that a word v is an [involutory] isoterm for a class C of semigroups [with involution] if the only [involutory] word v′ such that all members of C satisfy the [involution] semigroup identity v ≏ v′ is the word v itself. If a semigroup S satisfies the identities x²y ≏ x² ≏ yx², then S has a zero and the value of the word x² in S under every evaluation of the letter x in S is equal to zero. Having this in mind, we use the expression x² ≏ 0 as an abbreviation for the identities x²y ≏ x² ≏ yx². The last ingredient that we need comes from [Sapir, 1987, Proposition 3] for the plain case and from [Auinger, Dolinka and Volkov, 2012a, Corollary 2.6] for the involution case. Proposition 4. Let V be a variety of [involution] semigroups such that: (i) every member of V satisfying x² ≏ 0 is locally finite; (ii) each Zimin word is an [involutory] isoterm relative to V. Then V is nonfinitely based. In the following we shall present a specialization of Proposition 4 by presenting a sufficient condition for a variety V to satisfy condition (i). An essential step towards this result is the next lemma whose proof is a refinement of one of the crucial arguments in [Sapir and Volkov, 1994]. Here Com denotes the variety of all commutative semigroups. Lemma 5. Let T be a semigroup in Com m L and let I be the ideal of T generated by {t² | t ∈ T}. Then the Rees quotient T/I is locally finite. Proof. Let α be a congruence on T such that T/α is locally finite and the idempotent α-classes are commutative subsemigroups of T. Let ρ_I be the Rees congruence of T corresponding to the ideal I and β = α ∩ ρ_I. We have the following commutative diagram in which all homomorphisms are canonical projections.
        T
        |
       T/β
      /    \
   T/α    T/ρ_I = T/I

Recall that a semigroup is said to be periodic if each of its one-generated subsemigroups is finite. The semigroup T/α is locally finite and thus periodic. Moreover, since the restrictions of α and β to the ideal I coincide, we have I/α = I/β whence I/β is periodic, as well. Since for each element of T/β, its square belongs to I/β, it follows that T/β is also periodic, and so is each subsemigroup of T/β. Now let A ∈ T/α be an idempotent α-class; by assumption, A is a commutative subsemigroup of T. Then the inverse image of A (considered as an element of T/α) under the canonical projection T/β ։ T/α is the subsemigroup A/β of T/β, and this subsemigroup is at the same time commutative and periodic. It is well known (and easy to verify) that every commutative periodic semigroup is locally finite. We see that the congruence α/β on T/β satisfies the two conditions: (a) the quotient (T/β)/(α/β) ≅ T/α is locally finite and (b) the α/β-classes which are subsemigroups are locally finite. By Proposition 3, T/β is itself locally finite, and so is its quotient T/I. For two semigroup varieties V and W, their Mal'cev product V m W within the class of all semigroups may fail to be a variety but it is always closed under forming subsemigroups and direct products [see Mal'cev, 1967, Theorems 1 and 2]. Therefore the variety var(V m W) generated by V m W is comprised of all homomorphic images of the members of V m W. We are now in a position to formulate and to prove our main result. Theorem 6. Let V be a variety of [involution] semigroups such that: (i) V ⊆ var(Com m W) for some locally finite semigroup variety W (in the involution case, this inclusion is required of the semigroup reducts of the members of V); (ii) each Zimin word is an [involutory] isoterm relative to V. Then V is nonfinitely based. Proof. By Proposition 4, it suffices to verify that all members of V satisfying x² ≏ 0 are locally finite. Since an involution semigroup is locally finite if and only if so is its semigroup reduct, it suffices to do so for the semigroup reducts of the members of V. Let W be a locally finite semigroup variety as per condition (i). We need to check that each semigroup S ∈ var(Com m W) which satisfies x² ≏ 0 is locally finite. As we observed prior to the formulation of the theorem, S is a homomorphic image of a semigroup T ∈ Com m W; let ϕ stand for the corresponding homomorphism. Consider the ideal I in T generated by {t² | t ∈ T}. Then I ⊆ 0ϕ⁻¹, and therefore, the homomorphism ϕ factors through T/I which is locally finite by Lemma 5. Consequently, S is also locally finite. Remark 1. It follows immediately from the proof of Lemma 5 that Theorem 6 remains valid if we replace the variety Com of all commutative semigroups by an arbitrary semigroup variety all of whose periodic members are locally finite. Remark 2. For a locally finite [involution] semigroup variety V, condition (i) is trivially satisfied with W = V. In this case, condition (ii) is sufficient for V to be nonfinitely based; moreover, V then is even inherently nonfinitely based, i.e., it is not contained in any finitely based locally finite variety. The corresponding result is captured by [Sapir, 1987] for plain semigroups and by [Auinger, Dolinka and Volkov, 2012a] for involution semigroups. It follows that the novelty in the present paper, though not always explicitly mentioned, is about infinite [involution] semigroups, or, to be more precise, [involution] semigroups which do not generate a locally finite variety. Remark 3. Proposition 4 and therefore Theorem 6 formulate, in fact, sufficient conditions that the variety in question be not only nonfinitely based but even be of infinite axiomatic rank, that is, there is no basis for the equational theory that uses only finitely many variables.
Consequently, in all our applications, the respective [involution] semigroups are also not only nonfinitely based but even of infinite axiomatic rank. This is worth registering because an infinite [involution] semigroup can be nonfinitely based but of finite axiomatic rank. Remark 4. If two given varieties X and Y of [involution] semigroups satisfy X ⊆ Y, and Y satisfies condition (i) while X satisfies condition (ii), then all varieties V such that X ⊆ V ⊆ Y satisfy both conditions, and therefore, are nonfinitely based. Stated this way, Theorem 6 may be used to produce intervals consisting entirely of nonfinitely based varieties in the lattice of [involution] semigroup varieties. We conclude this section with an example of such an application. For two varieties V and W, we denote by V ∨ W their join, i.e., the least variety containing both V and W. Sapir and Volkov [1994] proved that for each locally finite semigroup variety W which contains the variety B of all bands (idempotent semigroups), the join Com ∨ W is nonfinitely based. More precisely, in Sapir and Volkov [1994] it is shown that each Zimin word is an isoterm relative to Com ∨ B and each member of Com ∨ W which satisfies the identity x² ≏ 0 is locally finite (the latter by an argument that has been refined in the proof of Lemma 5). By Theorem 6 it follows that each variety V for which Com ∨ B ⊆ V ⊆ var(Com m W) is nonfinitely based. Notice that Com ∨ W ⊆ var(Com m W) so that the quoted result from [Sapir and Volkov, 1994] appears as a special case. One can obtain an analogous result for involution semigroups if B is replaced by the variety B⋆ of all bands with involution and commutative semigroups are considered to be equipped with trivial involution (for the verification that Zimin words are involutory isoterms relative to Com ∨ B⋆ one can use Lemma 8 formulated in the next section). Applications For every n there is an injective semigroup homomorphism K_n ↪ K_{n+1} (induced by the map c → c, h_i → h_i for i = 1, . . . , n−1) which is compatible with the reflection. Consequently, for every n we have the inclusion var K_n ⊆ var K_{n+1}. As mentioned earlier, K_n ≤ W_n whence var K_n ⊆ var W_n for every n. These inclusions are true if the respective structures are considered either as semigroups or as involution semigroups with respect to the reflection. We start with applying Theorem 6 to the Kauffman monoids K_n and the wire monoids W_n with n ≥ 3. Theorem 7. Let n ≥ 3 and consider K_3 and W_n either as semigroups or as involution semigroups with respect to reflection. Then every [involution] semigroup variety V such that var K_3 ⊆ V ⊆ var W_n is nonfinitely based. Proof. We invoke Theorem 6 in the form of Remark 4 and show that var W_n satisfies (i) and var K_3 satisfies (ii). Thus, we are to check that the semigroup W_n belongs to the Mal'cev product of Com with a locally finite semigroup variety and that each Zimin word is an [involutory] isoterm relative to K_3. The first claim readily follows from Lemma 1. Indeed, by this lemma there is a homomorphism ϕ : W_n ։ B_n with the property that for every idempotent in B_n, its inverse image under ϕ is a commutative subsemigroup in W_n. This immediately yields that W_n belongs to the Mal'cev product Com m var B_n, and var B_n is locally finite as the variety generated by a finite algebra [see Burris and Sankappanavar, 1981, Theorem 10.16]. In order to show that Zimin words are isoterms relative to K_3, consider the ideal C of K_3 generated by c.
If we denote the images of h_1 and h_2 in the Rees quotient K_3/C by a and b respectively, then the relations of K_3 translate into the following relations for a and b: a² = 0, b² = 0, aba = a, bab = b. These relations define the 6-element Brandt monoid B_2^1 (in the class of all monoids with 0). Thus, K_3/C satisfies the relations of B_2^1, and the Rees quotient is in fact isomorphic to B_2^1; since each Zimin word is an isoterm relative to B_2^1 [see Sapir, 1987, Lemma 3.7], each Zimin word is an isoterm relative to K_3 as well. A similar claim is needed for the rotation: each Zimin word is an involutory isoterm relative to K_ℓ^ρ for ℓ = 3 and ℓ = 4. For ℓ = 4 this follows from the analogous fact for the Jones monoid J_4 considered as an involution semigroup under rotation (this fact has been shown in [Auinger, Dolinka and Volkov, 2012b, Theorem 2.13]); by Lemma 2 the latter monoid is a quotient of K_4^ρ. It remains to consider the case ℓ = 3. We do not know whether or not TSL belongs to the variety var K_3^ρ hence we do not know if we can proceed as in the proof of Theorem 7. Nevertheless, we will show that each Zimin word is an involution isoterm relative to K_3^ρ. Arguing by contradiction, assume that for some n and some involutory word w, the identity Z_n ≏ w holds in K_3^ρ. First we observe that each letter x_i, i = 1, 2, . . . , n, occurs the same number of times in Z_n and w. For this, we substitute c for x_i and 1 for all other letters. The value of the word Z_n under this substitution is c^{2^{n−i}} since it is easy to see that x_i occurs 2^{n−i} times in Z_n. Similarly, since c^ρ = c, the value of w is c^k, where k is the number of occurrences of x_i in w. As Z_n ≏ w holds in K_3^ρ, the two values should coincide whence k = 2^{n−i}. In a similar manner one can verify that the only letters occurring in w are x_1, x_2, . . . , x_n. We have already shown that Z_n is an isoterm relative to K_3 considered as a plain semigroup. Hence w must be a proper involutory word, that is, it has at least one occurrence of a 'starred' letter. We fix an i ∈ {1, 2, . . . , n} such that x_i⋆ occurs in w and substitute h_1 for x_i and 1 for all other letters. It is easy to calculate that the value of the word Z_n under this substitution is c^{2^{n−i}−1} h_1. Since h_1^ρ = h_2 in K_3^ρ and x_i occurs 2^{n−i} times in w, the word w evaluates to a product p of 2^{n−i} factors each of which is either h_1 or h_2 and at least one of which is h_2. As Z_n ≏ w holds in K_3^ρ, the value of p must coincide with c^{2^{n−i}−1} h_1, which is only possible when the first and the last factors of p are h_1. Then the relations (2) and (4) ensure that the value of p is c^k h_1, where k is the total number of occurrences of the factors h_1 h_1 and h_2 h_2 in p. However, p has at least one occurrence of h_1 h_2 and at least one occurrence of h_2 h_1, and therefore k ≤ 2^{n−i} − 3, a contradiction. Remark 5. To get a version of Theorem 7 that could be stated and justified without any appealing to geometric considerations, one should change W_n to K_n in the formulation of Theorem 7 and refer to Lemma 2 instead of Lemma 1 in its proof. (Recall that we outlined a 'picture-free' proof of Lemma 2 at the end of Section 1.) This reduced version of Theorem 7 still suffices to solve the finite basis problem for the identities holding in the Kauffman monoids. The same observation applies to Theorem 9. Remark 6. Theorems 7 and 9 imply that each of the monoids W_n and K_n with n ≥ 3 is nonfinitely based as both a plain semigroup and an involution semigroup with either reflection or rotation. For the sake of completeness, we mention that the monoids W_2 and K_2 are easily seen to be commutative and hence they are finitely based by a classic result of Perkins [1969].
Moreover, both reflection and rotation act trivially in W_2, and therefore, W_2 and K_2 are also finitely based as involution semigroups. In a similar manner, Theorem 6 allows one to solve the finite basis problem for many other species of infinite diagram monoids in the setting of both plain and involution semigroups. These applications of Theorem 6 will be published in a separate paper, while here we restrict ourselves to demonstrating another application of rather a different flavor. Recall the classic Rees matrix construction [see Clifford and Preston, 1961, Chapter 3, for details and for the explanation of the role played by this construction in the structure theory of semigroups]. Let G = ⟨G, ·⟩ be a semigroup, 0 a symbol beyond G, and I, Λ non-empty sets. Given a Λ × I-matrix P = (p_{λi}) over G ∪ {0}, we define a multiplication · on the set (I × G × Λ) ∪ {0} by the following rules: (i, g, λ) · (j, h, µ) := (i, g p_{λj} h, µ) if p_{λj} ≠ 0, while (i, g, λ) · (j, h, µ) := 0 if p_{λj} = 0, and 0 · s := s · 0 := 0 for every element s. Then ⟨(I × G × Λ) ∪ {0}, ·⟩ becomes a semigroup denoted by M⁰(I, G, Λ; P) and is called the Rees matrix semigroup over G with the sandwich matrix P. For a semigroup S, we let S¹ stand for the monoid obtained from S by adjoining a new identity element. Theorem 10. Let G = ⟨G, ·⟩ be an abelian group and S = M⁰(I, G, Λ; P) be a Rees matrix semigroup over G. If the matrix P has a submatrix of one of the forms $\begin{pmatrix} a & b \\ c & 0 \end{pmatrix}$ or $\begin{pmatrix} 0 & b \\ c & 0 \end{pmatrix}$ where a, b, c ∈ G, or $\begin{pmatrix} e & e \\ e & d \end{pmatrix}$ where e is the identity of G and d ∈ G is an element of infinite order, then the monoid S¹ is nonfinitely based. Proof. Let E = ⟨{e}, ·⟩ be the trivial group, and let P̄ = (p̄_{λi}) be the Λ × I-matrix over {e, 0} obtained when each non-zero entry of P gets substituted by e. Consider the Rees matrix semigroup T = M⁰(I, E, Λ; P̄). It is easy to see that the map ϕ defined by (i, g, λ) → (i, e, λ), 0 → 0, 1 → 1 is a homomorphism from S¹ onto T¹. It is known [see, e.g., Hall, 1991, proof of Theorem 3.3] that every Rees matrix semigroup over E belongs to the variety generated by the 5-element semigroup A_2 that can be defined as the Rees matrix semigroup over E with the sandwich matrix $\begin{pmatrix} e & e \\ e & 0 \end{pmatrix}$. Therefore T¹ lies in the variety var A_2^1. The inverse image of an arbitrary element (i, e, λ) ∈ T under ϕ consists of all triples of the form (i, g, λ) where g runs over G. If for some j ∈ I, µ ∈ Λ, the triple (j, e, µ) is an idempotent in T, then p̄_{µj} = e ≠ 0, whence p_{µj} ≠ 0 as well. Therefore the product of any two triples (j, g, µ), (j, h, µ) ∈ (j, e, µ)ϕ⁻¹ is equal to (j, g p_{µj} h, µ) and this result does not depend on the order of the factors since the group G is abelian. Taking into account that 0ϕ⁻¹ = {0} and 1ϕ⁻¹ = {1}, we see that the inverse image under ϕ of every idempotent in T¹ is a commutative subsemigroup in S¹. Thus, S¹ belongs to the Mal'cev product Com m var A_2^1, and var A_2^1 is locally finite as the variety generated by a finite algebra [see Burris and Sankappanavar, 1981, Theorem 10.16]. In view of Theorem 6, it remains to verify that each Zimin word is an isoterm relative to S¹. Here we invoke the premise that the matrix P has a 2 × 2-submatrix of a specific form. We fix such a submatrix P′ of one of the given forms and let Λ′ = {λ, µ} ⊆ Λ and I′ = {i, j} ⊆ I be such that P′ occurs at the intersection of the rows whose indices are in Λ′ with the columns whose indices are in I′. First consider the case when P′ is either $\begin{pmatrix} a & b \\ c & 0 \end{pmatrix}$ or $\begin{pmatrix} 0 & b \\ c & 0 \end{pmatrix}$. Clearly, the Rees matrix semigroup U = M⁰(I′, G, Λ′; P′) is a subsemigroup of S whence U¹ is a subsemigroup of S¹.
Then the image of U¹ under the homomorphism ϕ is a subsemigroup V¹ of T¹ where V can be identified with the Rees matrix semigroup over E whose sandwich matrix is either $\begin{pmatrix} 0 & e \\ e & 0 \end{pmatrix}$ or $\begin{pmatrix} e & e \\ e & 0 \end{pmatrix}$. In the latter case, the semigroup V is isomorphic to the semigroup A_2. We have already used the fact that every Rees matrix semigroup over E belongs to the variety var A_2; this implies that in any case the Rees matrix semigroup B = M⁰(I′, E, Λ′; $\begin{pmatrix} 0 & e \\ e & 0 \end{pmatrix}$) belongs to the variety var V. Hence B¹ ∈ var V¹, and it is easy to verify that the bijection 1 ↦ 1, 0 ↦ 0, (i, e, λ) ↦ a, (j, e, µ) ↦ b, (i, e, µ) ↦ ab, (j, e, λ) ↦ ba is an isomorphism between B¹ and the 6-element Brandt monoid B_2^1 (we have defined the latter monoid in the proof of Theorem 7). Thus, B_2^1 lies in the variety var S¹, and each Zimin word is an isoterm relative to B_2^1 [see Sapir, 1987, Lemma 3.7]. Now suppose that P′ = $\begin{pmatrix} e & e \\ e & d \end{pmatrix}$ with d ∈ G being an element of infinite order. One readily verifies that the set R = { (k, dⁿ, ν) | k ∈ I′, ν ∈ Λ′, n = 0, 1, 2, . . . } forms a subsemigroup in S while the set J = { (k, dⁿ, ν) | k ∈ I′, ν ∈ Λ′, n = 1, 2, . . . } forms an ideal in R. It is easy to calculate that the Rees quotient R/J is isomorphic to the semigroup A_2, and we again conclude that B_2^1 lies in the variety var S¹. Remark 7. Suppose that G = ⟨G, ·⟩ is an abelian group, I is a non-empty set, 0 is a symbol beyond G, and P = (p_{ij}) is a symmetric I × I-matrix over G ∪ {0}. Then one can equip the Rees matrix semigroup M⁰(I, G, I; P) with an involution by letting 0⋆ := 0, (i, g, j)⋆ := (j, g, i). A version of Theorem 10 holds also for involution monoids that are obtained from such involution semigroups by adjoining a new identity element. Remark 8. Theorem 10 remains valid if we replace the abelian group G by an arbitrary semigroup H from a variety U all of whose periodic members are locally finite. In the matrix $\begin{pmatrix} e & e \\ e & d \end{pmatrix}$ the elements e, d ∈ H have to be chosen such that e² = e, ed = d = de and dⁿ ≠ e for all positive integers n. Remark 9. Readers familiar with the role of Rees matrix semigroups in the structure theory of semigroups will notice that Theorem 10 shows that for each completely simple semigroup S which admits two idempotents whose product has infinite order and whose maximal subgroups are abelian, the monoid S¹ is nonfinitely based. Indeed, S admits a Rees matrix representation M(I, G, Λ; P) (the construction mentioned above but without 0) such that P has a submatrix of the form $\begin{pmatrix} e & e \\ e & d \end{pmatrix}$ and d has infinite order in G. The proof of Theorem 10 then shows that S¹ ∈ var(Com m B) and A_2^1 ∈ var S¹ hence each Zimin word is an isoterm relative to S¹.
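To make the Rees matrix construction of Theorem 10 concrete, the following short Python sketch (our own illustration with ad hoc encodings; not from any of the cited works) builds M⁰(I, G, Λ; P) over the trivial group, adjoins an identity, and checks that the sandwich matrix (0 e / e 0) yields exactly the 6-element Brandt monoid B_2^1 with the relations a² = b² = 0, aba = a, bab = b used in the proof of Theorem 7.

from itertools import product

def rees_semigroup(I, G, Lam, P, op):
    """Elements and multiplication of M0(I, G, Lam; P).

    G is a list of group elements with operation op; P maps (lam, i)
    to a group element or None (None plays the role of 0).  Elements
    are triples (i, g, lam) together with the zero, written '0'.
    """
    elems = ['0'] + [(i, g, l) for i in I for g in G for l in Lam]
    def mul(s, t):
        if s == '0' or t == '0':
            return '0'
        (i, g, lam), (j, h, mu) = s, t
        p = P[(lam, j)]
        return '0' if p is None else (i, op(op(g, p), h), mu)
    return elems, mul

# Trivial group E = {0}, written additively; sandwich matrix (0 e / e 0).
I, Lam, G = ['i', 'j'], ['lam', 'mu'], [0]
P = {('lam', 'i'): None, ('lam', 'j'): 0,
     ('mu', 'i'): 0,     ('mu', 'j'): None}
elems, mul = rees_semigroup(I, G, Lam, P, lambda a, b: (a + b) % 1)
monoid = ['1'] + elems                     # adjoin a new identity '1'
def mmul(s, t):
    if s == '1':
        return t
    if t == '1':
        return s
    return mul(s, t)
assert len(monoid) == 6                    # 1, 0 and four triples
for s, t, u in product(monoid, repeat=3):  # sanity check: associativity
    assert mmul(mmul(s, t), u) == mmul(s, mmul(t, u))
a, b = ('i', 0, 'lam'), ('j', 0, 'mu')
assert mmul(a, a) == '0' and mmul(b, b) == '0'
assert mmul(a, mmul(b, a)) == a and mmul(b, mmul(a, b)) == b

Replacing the trivial group by an infinite cyclic group and the sandwich matrix by (e e / e d) reproduces the subsemigroup R and ideal J used in the second case of the proof.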
2014-05-05T05:34:09.000Z
2014-05-05T00:00:00.000
{ "year": 2014, "sha1": "e233212f841332f8715c4ab7bd1cecb24df60b7d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1405.0783", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e233212f841332f8715c4ab7bd1cecb24df60b7d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
234077143
pes2o/s2orc
v3-fos-license
All-optical radio frequency spectrum analyzer with a 5 THz bandwidth based on CMOS-compatible high-index doped silica waveguides We report an all-optical radio-frequency (RF) spectrum analyzer with a bandwidth greater than 5 terahertz (THz), based on a 50-cm long spiral waveguide in a CMOS-compatible high-index doped silica platform. By carefully mapping out the dispersion profile of the waveguides for different thicknesses, we identify the optimal design to achieve near zero dispersion in the C-band. To demonstrate the capability of the RF spectrum analyzer, we measure the optical output of a femtosecond fiber laser with an ultrafast optical RF spectrum in the terahertz regime. All optical radio frequency (RF) spectral measurements provide an effective way to analyze ultrafast signals, and have been applied to optical performance monitoring of optical telecommunication systems, ultrafast optical signal characterization and microwave photonics [1][2][3]. Traditional RF spectral measurement techniques are based on photon detection combined with an electrical spectrum analyzer. Conversely, an all-optical RF spectrum analyzer measures the RF frequency spectrum by detecting the optical signal directly in the optical domain, via the nonlinear optical Kerr effect, allowing it to break the electronic bottleneck arising from the optical-to-electrical conversion. The all-optical approach allows the operation bandwidth to extend into the terahertz regime [4][5][6][7][8][9][10][11]. All-optical RF spectrum analyzers have been demonstrated in a number of nonlinear waveguide platforms, including highly nonlinear fiber (HNLF) [4,5], silicon [6,7], chalcogenide glass [8,9], silicon nitride [10], and doped silica glass waveguides [11]. When implemented on integrated waveguide platforms, the photonic-chip based RF spectrum analyzer (PC-RFSA) has the advantages of being compact and allowing the integration of additional optical structures for enhanced functionality [12][13][14][15][16][17][18]. Previously, Ferrera et al. demonstrated a broadband PC-RFSA with a 4-cm long high-index doped glass waveguide and obtained a bandwidth of 2.5 THz in the telecom C-band [11]. Despite the wide bandwidth that it achieved, the waveguide structure used in that work was not optimized for C-band operation and had a zero dispersion wavelength (ZDW) at the upper edge of the band at around 1560 nm, instead of at the middle of the band at 1547.5 nm. Due to the non-ideal ZDW location, the achieved wide bandwidth of 2.5 THz came at the expense of using a shorter waveguide and reduced sensitivity. Here, we demonstrate a high-index doped silica based PC-RFSA with improved performance by designing a waveguide structure with a low dispersion slope and dispersion optimized for C-band operation, having a ZDW near 1547 nm. We obtain a bandwidth of 5 THz that covers the entire C-band using a 50-cm long waveguide. With the optimized waveguide dispersion, one can obtain broadband operation even with a longer waveguide, which results in a higher signal sensitivity.
Figure 1 shows the principle of operation of the PC-RFSA. An optical signal under test (SUT) having an RF spectrum described by S(ω) = |∫ P(t) e^{−iωt} dt|², where P(t) is its temporal intensity, is mixed with a continuous-wave (CW) probe beam, with electric field given by E₀ exp(−iω₀t), and both are then launched into a nonlinear medium [4]. Due to the near-instantaneous response of the cross-phase modulation (XPM) effect [19] in the medium, the phase of the CW probe beam is modulated by the intensity of the SUT, with a modulation index of 2γL_eff, where γ and L_eff are the nonlinear parameter and effective propagation length of the waveguide, respectively. As the combined signals propagate through the nonlinear waveguide, frequency modulation sidebands are generated around ω₀, with their combined optical spectrum being proportional to |E₀|² S(ω − ω₀) due to the XPM [4]. This results in the RF spectrum S(ω) of the SUT being mapped onto the sideband of the optical spectrum of the probe light, which can then be measured directly by an optical spectrum analyzer (OSA). Since this is an all-optical operation, in principle it can achieve unlimited bandwidth as it is not limited by the electronic bandwidth from optical-to-electrical conversion. In practice, effects such as the group velocity mismatch between the SUT and the CW probe light limit the bandwidth through the walk-off they accumulate, which is governed by D, the dispersion of the medium, dD/dλ, the dispersion slope at the wavelength where the SUT is centered, and Δλ, the wavelength spacing between the SUT and the CW probe light [11]. Therefore, waveguides with both low dispersion and low dispersion slope are crucial to maximize the device bandwidth. The optimal bandwidth for a given probe-to-SUT wavelength separation Δλ is obtained by setting the probe and SUT wavelengths symmetrically with respect to the ZDW [4]. For C-band applications, the ideal ZDW location is at the center of the band, around 1547 nm. To optimize the waveguide for PC-RFSA operation in the C-band, we calculate the waveguide dispersion as a function of geometry and wavelength. Figures 2(a) and 2(c) show the TE and TM dispersion maps as a function of waveguide thickness for a width of 2 μm. The wavelength dependent material refractive indices used in the calculation are obtained from film index measurements to account for material dispersion. The gray dashed lines in the figures reveal the zero-dispersion points over the mapped space and they indicate that for operation with a zero-dispersion wavelength of around 1547 nm, the thickness of the waveguide is 1.27 μm for the TE mode and 1.35 μm for the TM mode. Figures 2(b) and 2(d) show the calculated nonlinear parameter γ for the TE and TM modes of the waveguide versus thickness. The nonlinearity increases with decreasing thickness and wavelength due to stronger optical confinement. However, at 1547 nm there is minimal change in γ for thicknesses from 0.5 μm to 1.5 μm. A series of 50-cm and 100-cm long spiral waveguides with a width of 2 μm and thickness ranging from 1.00 μm to 1.50 μm in steps of 0.25 μm were fabricated using the process described in [20][21][22][23][24][25][26] to investigate their performance as PC-RFSAs. The inset in Fig. 3 shows a scanning electron microscope (SEM) image of the fabricated buried waveguide with cross-section 2 μm × 1.25 μm. The core refractive index is 1.7 and the cladding index is 1.45. It can be seen that there is a sidewall angle of approximately 8° resulting from the etching process.
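Before turning to the measured device parameters, the walk-off argument above can be made concrete with a few lines of Python (a back-of-the-envelope sketch written for this text, using the TE-mode dispersion quoted in the next paragraph; the 1/(2·walk-off) roll-off estimate is our own order-of-magnitude heuristic, not the full bandwidth model of Ref. [11]).

# Walk-off between the SUT (~1547 nm) and the probe (1601 nm) over
# the 50-cm spiral, for the 2 um x 1.25 um TE mode quoted below.
D = -2.6e-3                  # dispersion, ps/(nm*m)  (= -2.6 ps/nm/km)
L = 0.5                      # waveguide length, m
d_lambda = 1601.0 - 1547.0   # SUT-probe wavelength spacing, nm

walkoff_ps = abs(D) * L * d_lambda
print(f"XPM walk-off delay: {walkoff_ps * 1e3:.0f} fs")          # ~70 fs

# XPM rolls off once the RF period becomes comparable to the walk-off;
# 1/(2 * walkoff) gives the right order of magnitude in THz.
print(f"walk-off-limited bandwidth ~ {1.0 / (2 * walkoff_ps):.1f} THz")

The resulting 70 fs delay and ~7 THz roll-off scale are consistent with the measured 5 THz 3-dB bandwidth reported below.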
With the mode transformers at the input and output ends of the spirals, the coupling loss between the optical fiber and waveguide is about 1 dB/facet, while the propagation loss varies from 0.06 to 0.1 dB/cm at 1547 nm for the different thicknesses. The calculated dispersion curve for a 2 μm × 1.25 μm waveguide (Fig. 3) shows that the ZDWs are at 1542 nm and 1503 nm for the TE and TM modes, respectively. At 1547 nm, the desired operation point for C-band applications, the dispersion is −2.6 ps·nm⁻¹·km⁻¹ for the TE mode and −22.1 ps·nm⁻¹·km⁻¹ for the TM mode, while the dispersion slopes are −0.47 ps·nm⁻²·km⁻¹ and −0.55 ps·nm⁻²·km⁻¹ for the TE and TM modes, respectively. The dispersion for the TE mode across the whole C and L bands remains low at < 50 ps·nm⁻¹·km⁻¹. The setup for the PC-RFSA measurement is shown in Fig. 1, where tunable narrow-linewidth laser sources TLs1 and TLs2, with wavelengths at λ₁ and λ₂, respectively, served as the SUT, and TLs3 with wavelength λ₃ served as the CW probe. The two SUT signals were initially set at 1547 nm to fully utilize the entire C-band spectrum. Their wavelengths were then detuned in opposite directions from the C-band center to simulate the broadening of the SUT frequency spectrum in the measurement. The probe signal from TLs3 was fixed at 1601 nm, far away from the SUT signals, to avoid spectral overlap between the four-wave-mixing signal from the SUT and the XPM signal from the probe when the bandwidth measurement is performed over the whole C-band. PCs, a PBS and a PM were used to ensure that the polarization states of all three signals were aligned to the TE or TM mode of the waveguide. Figure 4 shows the normalized XPM power as a function of the frequency separation between TLs1 and TLs2 for the set of 50-cm spiral waveguides having thicknesses between 1.00 μm and 1.50 μm for the TE mode, with each of the measured data sets fitted to parabolic curves. For the 1.25 μm thick waveguide, we obtain a 3-dB bandwidth for the TE mode of ~5 THz and 3.43 THz for the 50-cm and 100-cm waveguides, respectively. Table 1 summarizes the results from the literature, showing that the 5 THz bandwidth obtained in this work corresponds to the highest value reported so far. We also obtained an XPM walk-off delay, defined as D·L·Δλ, of 70 fs for the 1.25 μm thick waveguide, indicating that the walk-off influence on the XPM is extremely small and the nonlinear interaction is maintained over a large Δλ. Fig. 4. Normalized XPM power as a function of the frequency spacing between TLs1 and TLs2 for the TE mode. The measured data are fit with a 2nd order polynomial function. Table 2 lists the measured 3-dB bandwidths for all three fabricated waveguides using the experimental setup in Fig. 1 and the same wavelengths. Since the ZDWs are located outside of the C-band for the 1.00 μm and 1.50 μm waveguides, their obtained bandwidths are limited. However, one can shorten the waveguide to further increase the bandwidth if desired, similar to [11]. Figure 5 compares the PC-RFSA sensitivity for a 50-cm and a 100-cm spiral waveguide, with the sensitivity defined as the power level of the generated XPM signal that is 3 dB above the OSA (Yokogawa, AQ6370D) noise floor [2] for a given probe power. This represents the minimum power level at which the PC-RFSA can detect the 3-dB bandwidth of the SUT. The frequency separation between the SUT signals was set to 625 GHz, with λ₁, λ₂, λ₃ set at 1550 nm, 1555 nm, 1580 nm, respectively.
By setting the probe power to 0 dBm, the relationship between the input SUT power and the generated XPM power from the 50-cm and 100-cm long spirals is shown in Fig. 5. Due to the accumulated nonlinear effect and the low propagation loss of the waveguide in the C-band, the 100-cm long spiral waveguide produced a higher XPM signal. At the lowest noise floor setting of the OSA at −88 dBm, the minimum resolvable input SUT power was 1.5 dBm for the 100-cm long waveguide, which was 2 dB lower than that for the 50-cm long waveguide. To investigate the distortion of the PC-RFSA, we measured the RF spectrum of a femtosecond fiber laser (PriTel, with a pulse width of 721 fs and a repetition rate of 20 MHz), centered at 1547 nm with an average power of 2.5 mW. The probe was centered at 1601 nm with a power of 45 mW. Figures 6(a) and 6(b) show the temporal intensity from autocorrelation measurements and its corresponding optical spectrum. The RF spectrum of the femtosecond pulse, measured by the PC-RFSA, is depicted in Fig. 6(c). The PC-RFSA measured an RF spectrum as broad as 2 THz. The dashed curve in Fig. 6(c) represents the simulated RF spectrum, fit to a sech⁴ function [9]. As shown in Fig. 6(c), the measured RF spectrum agrees well with theory, which proves that our PC-RFSA is suitable for RF spectrum analysis of signals with bandwidths in the terahertz regime. In conclusion, we present a PC-RFSA that achieves an RF bandwidth extending well into the terahertz regime, based on a high-index doped silica waveguide. We investigate the dispersion and nonlinear parameter for different waveguide dimensions. By engineering the dispersion, a 3-dB bandwidth of 5 THz is achieved, which is twice as large as that previously reported [11]. The measurement of the RF spectrum of a femtosecond fiber laser demonstrates its usefulness for frequency spectrum analysis of ultra-broadband optical signals involving frequencies in the terahertz range.
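To illustrate the XPM mapping itself, here is a minimal numerical toy (our own sketch, not the authors' simulation code; all parameter values are illustrative): a CW probe is phase-modulated by the intensity of a two-tone SUT, and the probe's optical power spectrum then shows a sideband at the SUT beat frequency, i.e. at the RF spectrum of the intensity.

import numpy as np

# Toy XPM mapping: the probe phase follows the SUT intensity
# instantaneously (Kerr response, no dispersion or walk-off).
fs = 8e12                          # sampling rate, Hz
t = np.arange(2 ** 16) / fs
f_beat = 250e9                     # two SUT tones spaced by 250 GHz
sut_field = 1.0 + np.exp(2j * np.pi * f_beat * t)   # envelope picture
intensity = np.abs(sut_field) ** 2
phi = 0.05 * intensity             # small XPM phase ~ 2*gamma*Leff*P(t)
probe = np.exp(1j * phi)           # probe in its own rotating frame
spec = np.abs(np.fft.fftshift(np.fft.fft(probe))) ** 2
freq = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))
peak = freq[np.argmax(spec * (freq > 50e9))]
print(f"strongest XPM sideband at {peak / 1e9:.0f} GHz")   # -> 250 GHz

Sweeping f_beat and reading off the sideband power reproduces, in miniature, the two-laser bandwidth measurement described above.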
2021-05-10T00:03:38.059Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "6fddf8501aaf1273ada2b8f3dee9ef5c130cf9b3", "oa_license": "CCBY", "oa_url": "https://www.techrxiv.org/articles/preprint/All-optical_radio_frequency_spectrum_analyzer_with_a_5_THz_bandwidth_based_on_CMOS-compatible_high-index_doped_silica_waveguides/13635380/files/26177834.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "2b443b3500eb771153d3d445c863987c2b7c94fd", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
234763121
pes2o/s2orc
v3-fos-license
Exact time-dependent density functional theory for non-perturbative dynamics of helium atom By inverting the time-dependent Kohn-Sham equation for a numerically exact dynamics of the helium atom, we show that the dynamical step and peak features of the exact correlation potential found previously in one-dimensional models persist for real three-dimensional systems. We demonstrate that the Kohn-Sham and true current-densities differ by a rotational component. The results have direct implications for approximate TDDFT calculations of atoms and molecules in strong fields, emphasizing the need to go beyond the adiabatic approximation, and highlighting caution in quantitative use of the Kohn-Sham current. I. INTRODUCTION For simulating dynamics of electrons in non-perturbative fields, time-dependent density functional theory [1][2][3][4] (TDDFT) has emerged as a key approach, due to its favorable system-size scaling. In theory, TDDFT is an exact formulation of quantum mechanics which provides a computationally tractable approach for tackling calculations involving many-body problems in external time-dependent fields. Mapping to a non-interacting system, the Kohn-Sham (KS) system, that exactly reproduces the one-body density allows the computation of much larger systems than with traditional wavefunction methods, with no restriction on the strength of the applied fields nor on how far the system is driven from equilibrium, see Refs. [5][6][7][8][9][10] for examples in a range of recent applications. TDDFT does not, however, come without its own difficulties; in particular, the exchange-correlation (xc) potential in which the many-body effects are "hidden" needs to be approximated as a functional of the density, and the exact xc potential depends on the density in a spatially- and time-non-local way. This dependence is neglected in the adiabatic approximations used in calculations today, where the instantaneous density is inserted into a ground-state xc approximation. A crucial question is how well these approximations capture the true dynamics. The lack of memory-dependence is believed to be responsible for errors in their predictions, e.g. Refs. [11][12][13][14][15][16][17][18][19][20][21][22], including sometimes qualitative failures. Still, the approximations are often accurate enough to be useful, and some characterization of when to expect the adiabatic approximation to work well has also been done [23]. Studies of the exact xc potential have been made, to compare against approximate potentials, and also to study the impact of its features on the resulting dynamics. Such studies require two ingredients: first, an exact calculation of the dynamics of interacting electrons, and second, an inversion of the TDKS equations to find the exact potential.
Because of challenges in obtaining these ingredients, the studies have so far been limited to model systems [23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39] involving either one dimension (1D) and/or two electrons, or involving only small perturbations away from the ground-state. In this work, the exact time-dependent KS (TDKS) potential is found for the first time for a real three-dimensional (3D) multi-electron atom in the non-perturbative regime. We find dynamical step and peak features that have a non-local-in-space and -in-time dependence on the density. The results have direct implications for TDDFT calculations of atoms and molecules driven far from their ground-state, as these features are missing in adiabatic approximations. They justify the relevance of the previous 1D studies, where similar dynamical step and peak features are found in the correlation potential. Moreover, the example explicitly demonstrates that the KS current-density differs from the true current-density by a rotational component. Although this has been recognized before to be theoretically possible [35,[40][41][42][43]], not only is this difference neglected in applications today, where typically the current calculated from the KS orbitals is assumed to represent the true current [44,45], but the difference has not been demonstrated for systems beyond the linear-response regime. II. DYNAMICS IN THE HELIUM ATOM The system we study is the field-free evolution of a superposition state of the helium atom, as might be reached, for example, by applying a field that is turned off after some time. The lowest few eigenstates of this atom were found using the time-dependent close-coupling method, making a partial wave expansion in coupled spherical harmonics, and using the finite element discrete variable representation to discretize the radial degrees of freedom [46,47]. We consider here linear superpositions of the singlet ground state 1¹S₀, denoted Ψ₀, and the singlet first excited state 2¹P₁ that has angular quantum numbers L = 1 and M = 0, denoted here Ψ₁, so the exact time-evolution of the two-electron state is Ψ(t) = (Ψ₀ + a e^{−iωt} Ψ₁)/√(1 + a²), (1) where ω = E_{2¹P} − E_{1¹S} = 0.77980 in atomic units (a.u.) is the frequency with which the system oscillates. The parameter a gives the relative fraction of the excited state; for example, a = 1 in the case of a 50:50 superposition. We aim then to find the time-dependent KS potential which reproduces the exact density of the interacting state Eq. (1), n(r, t) = 2 ∫ |Ψ(r, r′, t)|² d³r′. (2) We note here that the results we find for the xc potential apply to far more general dynamical situations than the field-free superposition state dynamics: due to an exact condition [48], the xc potential applies to any situation where the instantaneous interacting state is given by Eq. (1) at some time t, and the KS state is a Slater determinant (see Appendix A). Now, the TDDFT xc potential depends on the choice of the initial KS state [1,37,49]; the 1-1 density-potential mapping holds only for a given initial state, which endows v_XC(r, t) with a functional dependence on both the true and KS states, v_XC[n; Ψ(0), Φ(0)](r, t). In principle, one can begin in any initial KS state that reproduces the density of the initial interacting state and its first time-derivative; the structure of the exact xc potential has a strong dependence on this choice [25,28,31,32,37]. The choice we make here is a Slater determinant: this is the natural choice if the state Eq.
(1) is reached from applying an external field to a ground-state and then turning the field off. The Slater determinant is the natural choice in most physical situations, because they begin in the ground-state (see also discussion in Supplementary Material). One would use ground-state DFT to find the initial KS orbitals, and by the ground-state theorems, this is a Slater determinant. Since the KS evolution involves a one-body Hamiltonian, the state remains a single Slater determinant. For our two-electron spin-singlet system, this means that we always have a single spatial KS orbital that is doubly-occupied, and it must have the form φ(r, t) = √(n(r, t)/2) e^{iα(r,t)} (3) to reproduce the exact interacting density of Eq. (2), with the phase α related to the current j through the equation of continuity, ∇ · (n(r, t)∇α(r, t)) = −∂n(r, t)/∂t. (4) Inverting the TDKS equation yields the exact KS potential, v_S(r, t) = ∇²√(n(r, t))/(2√(n(r, t))) − |∇α(r, t)|²/2 − ∂α(r, t)/∂t. (5) The exact xc potential is then obtained from v_XC(r, t) = v_S(r, t) − v_ext(r, t) − v_H(r, t), (6) with the Hartree potential v_H(r, t) = ∫ n(r′, t)/|r − r′| d³r′ and external potential v_ext(r, t) = −2/|r|. Further, one can isolate the correlation component by noting that for our two-electron spin-singlet system the exchange potential is simply v_X(r, t) = −v_H(r, t)/2. Thus, finding the exact xc potential reduces to solving Eq. (4) for α(r, t). We note here that for a different choice of initial KS state, e.g. using a two-configuration state that is more similar to that of the actual interacting state, the inversion to find v_XC involves an iterative numerical procedure [50][51][52]; some examples for the 1D analog of the dynamics here can be found in Refs. [28,31,32]. This could be a more natural state to begin the KS calculation in some situations, e.g. if the state was prepared in such a superposition at the initial time; however, it is inaccessible in a KS evolution that begins in the ground state, as discussed earlier. The importance of judiciously choosing the KS initial state when using an adiabatic approximation has been realized and exploited in strong-field charge-migration simulations [9,53,54]. Eq. (4) has the form of a Sturm-Liouville equation, which has a unique solution for α(r, t) for a given boundary condition. Thanks to the azimuthal symmetry of our density (M = 0 at all times), we need solve this in effectively two dimensions. We construct an explicit matrix representation of the operator ∇ · n(r, t)∇ in spherical coordinates using a fourth order finite difference scheme subject to the following boundary conditions: α(r → ∞, t) = 0 and ∂α(r, t)/∂θ |_{θ=0,π} = 0. (7) Choosing this boundary condition at t = 0 yields α(r, 0) = 0 since initially the current is zero, and fixes our initial state as φ(r, 0) = √(n(r, 0)/2). The Runge-Gross theorem then ensures that there is a unique v_XC(r, t) that reproduces the exact n(r, t) and yields a unique α(r, t) at later times [55]. Subject to the boundary conditions Eq. (7), the numerical inversion of the matrix operator ∇ · n(r, t)∇ results in the solution of Eq. (4) for α(r, t), which in turn, when inserted into Eq. (5), yields the KS potential v_S(r, t) (some details in Appendix B). III. RESULTS Several symmetry features of the dynamics of our system simplify the analysis. The azimuthal symmetry mentioned earlier, together with the fact that the chosen superposition is one of an L = 0 and an L = 1 state, means that the density, current, and potentials in the lower half-plane (π/2 < θ < π) exactly follow those in the upper half-plane (0 < θ < π/2) a half-cycle out of phase, O(r, π − θ, t) = O(r, θ, t + T/2). (See also movies of the density, current-density, and correlation potentials in Supplementary Material.)
III. RESULTS

Several symmetry features of the dynamics of our system simplify the analysis. The azimuthal symmetry mentioned earlier, together with the fact that the chosen superposition is one of an L = 0 and an L = 1 state, means that the density, current, and potentials in the lower half-plane (π/2 < θ < π) exactly follow those in the upper half-plane (0 < θ < π/2) a half-cycle out of phase, O(r, π − θ, t) = O(r, θ, t + T/2). (See also movies of the density, current-density, and correlation potentials in the Supplementary Material.) Further, the simple form of the superposition means that O(r, T − t) = O(r, t). Thus we show time-snapshots only over a half cycle in the lower octant.

FIG. 1. Correlation potential v_C(r, t) for the 50:50 superposition case (a = 1) in the range π/2 < θ < π at times t = 0, T/8, T/4, 3T/4.

Figure 1 shows snapshots of the correlation potential indicated by fractions of the period of oscillation, T = 2π/ω = 8.057 a.u. One immediately notices the unmistakable presence of the step and peak features in the exact correlation potential that have been shown to arise in many 1D model systems [23-34]. The step and peak feature is initially most prominent in the region swept by π/2 < θ < π, and then decreases in magnitude, gliding out of this region and appearing on the other side of θ = π/2 at t = T/2. As in the 1D case, this time-dependent step has a spatially nonlocal and non-adiabatic dependence on the density and is completely unaccounted for in the adiabatic approximations: it is missing even in the exact adiabatic approximation, i.e. evaluating the exact ground-state xc potential on the instantaneous density [24-27,29]. These features often dominate the KS potential (see Fig. 2) and have been shown to be responsible for various errors in simulations using adiabatic approximations in 1D, e.g. [25,29]. Here we find they persist just as vigorously, with the same order of magnitude, in real 3D atoms driven far from their ground state. This justifies the relevance for real systems of the conclusions drawn from the 1D studies, and shows that such strong correlation effects are not a consequence of reduced dimensionality, as might have been assumed from ground-state systems [56]. These dynamical steps are distinct from those arising from fractional charges [38,57,58], or in response situations [59], as in the 1D case, and we expect them to appear generically when a system is driven far from its ground state. For the dynamics of this particular superposition state, at any instant of time the correlation potential asymptotes to the same (time-dependent) value in every direction in the lower half-plane, while asymptoting to a different value in the upper half-plane (recall O(r, T − t) = O(r, t)). At θ = π/2 there is a step and peak in v_C in the θ-direction, which gives a force that ensures KS currents, like the true currents, do not cross the xy-plane. This complex structure makes the inversion numerically unreliable right at θ = π/2. Along the θ = π/2 plane, the density of the P state vanishes, and the large change in the potential may be somewhat reminiscent of the abnormal divergent behavior along the HOMO nodal plane found in the ground-state potential [60]. Unlike the ground-state case, however, our density does decay differently along that plane than in other directions, and moreover cannot be captured by any adiabatic approximation. Decomposing the terms in Eq. (5), we find that the peak tends to arise from the second term while the third term results in the step. Because the KS current is j_S = n∇α, the second term and the peak are related to the local velocity j_S/n, while the step, when a cut is taken across a fixed θ, is related to the radial integral of the local acceleration: ∂α(r, t)/∂t = ∫^r ∇α̇(r′, t) · r̂ dr′.
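The term-by-term decomposition of Eq. (5) is easy to reproduce numerically. The sketch below evaluates the three terms on a 1D grid with assumed n, α, and ∂α/∂t profiles, purely to show which quantities feed the velocity-related and phase-time-derivative-related pieces:

```python
import numpy as np

# 1D sketch of the decomposition of the KS potential of Eq. (5):
#   v_S = lap(sqrt(n))/(2 sqrt(n)) - |grad(alpha)|^2 / 2 - d(alpha)/dt.
# The second term involves the local velocity grad(alpha) = j_S/n (the peak);
# the third, the time derivative of the phase (the step). All inputs below
# are illustrative stand-ins, not the helium data.
x = np.linspace(-8, 8, 400)
n = np.exp(-x**2 / 2) + 0.3 * np.exp(-(x - 3)**2 / 2)   # stand-in density
alpha = 0.2 * np.tanh(x)                                # stand-in phase
dalpha_dt = 0.05 * np.tanh(x)                           # stand-in d(alpha)/dt

sqrt_n = np.sqrt(n)
lap_sqrt_n = np.gradient(np.gradient(sqrt_n, x), x)

term1 = lap_sqrt_n / (2 * sqrt_n)          # "quantum potential"-like term
term2 = -0.5 * np.gradient(alpha, x)**2    # local-velocity term (peak)
term3 = -dalpha_dt                         # phase time-derivative term (step)
v_s = term1 + term2 + term3
print(v_s[::100])
```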
We note that the appearance of such dominating steps in the correlation potential is fundamentally linked to the difference in configurations of the interacting and KS states: the interacting system is a superposition of a ground and excited state, quite distinct from the KS Slater-determinant structure. Tuning down the electron interaction dampens the peak structure, but the step remains. Fig. 2 shows the components of the exact KS potential. We observe that in the central region, where most of the density is localized, the force from the correlation potential is much smaller than that from the exchange and Hartree terms. It is in fact of similar magnitude to that in the ground state [61]. Near the density minimum, where the excited state begins to dominate over the ground state, the correlation potential rises, and then falls before leveling out. In this region, the slopes are such that the step appears to be keeping different parts of the density separate, while the peak corrects for dynamical Coulombic electron-interaction effects. The lack of these features in the adiabatic approximations suggests that the resulting densities will not be as structured, and will tend to underestimate oscillation amplitudes in the dipole moment (as seen in the 1D case [32]). Taking different superpositions of the ground and excited states shows that the step and peak features are universally present in real 3D systems. Figure 3 shows the KS and correlation potentials at the initial time, when a in Eq. (1) is changed through 0, 0.5, 1, 2, ∞. We see that, for finite values of a, as the fraction of excited state is increased the step and peak decrease in magnitude but extend over a larger region and move inward, where more of the density is. The very sharp peak and large step seen when a = 0.5 (note it is scaled to fit on the plot) occur at a sharp minimum of the density and have a smaller impact on the ensuing dynamics than the softer but still prominent structures at large a, which occur in regions of greater density. When the excited state is fully occupied (a = ∞), the KS potential is such as to maintain the constant excited ¹P density at all times with a noninteracting doubly-occupied orbital, and the structure is not unlike that seen in the corresponding 1D excited helium atom of Ref. [37], in both magnitude and shape; again, even the adiabatically-exact potential would have a completely different structure.

IV. TRUE AND KOHN-SHAM CURRENT

Finally, we ask: how closely does the exact KS system reproduce the exact interacting current in this case? It was recognized in the early days of TDDFT that the exact KS current could differ from the true current by a rotational component [35,40-43], but how large this difference could be for realistic systems in the non-perturbative regime was unknown. The KS and true currents are equal in their longitudinal component, thanks to the equation of continuity, ∇ · j = −∂n/∂t, but they can differ in their rotational component. Indeed, for the two-electron singlet case with the KS system represented by a Slater determinant, it follows from Eq. (3) that j_S(r, t) = n(r, t)∇α(r, t). This implies that the KS velocity field j_S/n = ∇α is curl-free, so the true current can differ from the KS current by a rotational component wherever ∇ × (j/n) ≠ 0; the numerically computed ∇ × (j_S/n), by contrast, comes only from numerical error, and is negligible except near the origin and at large r, where the denominator is very small. We note that the curl of the current is only non-zero in the azimuthal direction, and this is the component of the curl that is plotted in the figure.
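The statement that the KS velocity field is irrotational, while a generic true velocity field need not be, is simple to verify numerically. The sketch below does so in 2D Cartesian coordinates with assumed fields (the paper works in spherical coordinates, and only the azimuthal curl component is relevant there):

```python
import numpy as np

# Since j_s = n * grad(alpha), the KS velocity field j_s/n = grad(alpha) is
# a pure gradient and so is curl-free, whereas a generic "true" velocity
# field need not be. alpha and the rotational field below are stand-ins.
x = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(x, x, indexing="ij")

alpha = np.sin(X) * np.exp(-(X**2 + Y**2) / 8)          # stand-in phase
vx = np.gradient(alpha, x, axis=0)
vy = np.gradient(alpha, x, axis=1)

def curl_z(fx, fy):
    """z-component of the 2D curl via central differences."""
    return np.gradient(fy, x, axis=0) - np.gradient(fx, x, axis=1)

print(np.max(np.abs(curl_z(vx, vy))))   # ~0: KS velocity field irrotational

# A rotational "true" velocity field, by contrast, has a nonzero curl:
ux = -Y * np.exp(-(X**2 + Y**2) / 8)
uy = X * np.exp(-(X**2 + Y**2) / 8)
print(np.max(np.abs(curl_z(ux, uy))))   # O(1): rotational component survives
```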
V. CONCLUSION

In summary, we have shown that the non-adiabatic dynamical features of the exact correlation potential, previously seen to arise in 1D model systems, persist with comparable magnitudes in real 3D systems, and are not a consequence of reduced dimensionality. The results inform the on-going development of more accurate functionals in TDDFT that capture these features [23,28], pressing the case to go beyond adiabatic functionals. Hybrid functionals, including range-separated ones, where non-local density-dependence arises from the orbital dependence in exact exchange, do not capture these features; this is particularly evident in the present two-electron case, where exact exchange simply cancels the self-interaction in the Hartree potential. Furthermore, we have demonstrated that the true interacting current differs from its KS counterpart, with the difference depending on the relative proportions of ground and excited state composing the state. The results thus advise caution when computing the current-density from the KS orbitals; this will be inherently approximate even if the exact xc functional were somehow known and used.
2021-05-19T01:16:03.232Z
2021-05-18T00:00:00.000
{ "year": 2021, "sha1": "9c0f15635b38ed4d12dc50010a9d123ae92f8400", "oa_license": null, "oa_url": "https://repositorio.uam.es/bitstream/10486/699182/2/exact_dar_PR%20A_2021.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9c0f15635b38ed4d12dc50010a9d123ae92f8400", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
259639076
pes2o/s2orc
v3-fos-license
Influence of maternal breastfeeding practices on the anthropometric status of their infants (0-12 months) in Arochukwu L.G.A Abia state, Nigeria Background: Breast milk remains the best start to life in all areas of infants' development. Cow milk is best for cows, and human breast milk is best for human babies. Objectives: This study assessed the influence of maternal breastfeeding practices on the anthropometric status of their infants (0-12 months) in Arochukwu L.G.A., Abia State. Methods: A total of 250 mothers were selected using a simple random sampling technique. Data were collected on socio-economic/demographic characteristics; knowledge, attitude and practice of mothers towards exclusive breastfeeding; and factors that influence maternal breastfeeding. Anthropometric measurements of weight and height were taken using standard procedures. IBM SPSS version 20 was used to analyze the data. WHO Anthro was used to compute the anthropometric status of the children. Results: About 38.4% of the mothers were aged between 20-24 years. The majority of the mothers (62.5%) were married. About 73.2% of the mothers earned below ₦20,000 monthly. About 42.4% of mothers exclusively breastfed their babies, while about 33.2% of the children were initiated to breast milk within 30 min after birth. About 42.4% of the mothers breastfed their babies for about 0-6 months before introducing complementary foods. More than half of the mothers (53.6%) had a good knowledge of exclusive breastfeeding; 41.2% had poor knowledge, while a few (5.2%) had an excellent knowledge. About 33.6% of the mothers fed their babies with infant formula, while 24.8% fed their babies with breast milk after delivery. The majority (71.2%) fed their children 3-4 times daily. More than half of the infants (56.8%) were stunted, 40.4% were underweight and 15.2% were wasted. The prevalence of stunting and wasting was higher in males than in females; wasting was most prevalent among 7-9-month-old children. Conclusion: The high levels of malnutrition in this study underline the great need for nutrition intervention. Exclusive breastfeeding and timely introduction of appropriate complementary feeding are key factors in child growth.
Introduction

Breastfeeding is feeding an infant or young child with breast milk directly from the human breast. Breastfeeding is an unequalled way of providing food for the healthy growth and development of infants, and it is very useful both for the child and for the mother. For the child, breast milk is easy to digest and is the most complete form of nutrition for infants (WHO, 2007) [15]. Exclusive breastfeeding, based on the WHO (2004) [16] definition, refers to the practice of feeding only breast milk (including expressed breast milk) for six months. Breastfeeding should be initiated within the first hour after birth. Exclusive breastfeeding reduces infant mortality due to common childhood illnesses such as diarrhea or pneumonia and makes for a quicker recovery during illness. In Nigeria, breastfeeding practices continue to fall well below the WHO/UNICEF recommendations. For instance, the current percentage of children who are breastfed exclusively in Nigeria is 17% (UNICEF, 2015) [14]. Various factors associated with sub-optimal breastfeeding practices have been identified in various settings; these include maternal characteristics such as age, marital status, occupation, educational level, antenatal and maternity health care; support from partner, family and friends; and fear of inadequate milk supply (UNICEF, 2008) [12]. Successful breastfeeding requires adequate nutrition, rest and the support of all who care about the well-being of mother and infant. Breast milk remains the best start to life in all areas of infant development (UNICEF, 2008) [12]. Breast milk alone is the ideal nourishment for infants for the first 6 months of life, providing all the nutrients, including vitamins and minerals, an infant needs at this period of life. Human milk contains many immunological agents that protect infants against a variety of infections. Poor infant feeding practices, particularly inadequate breastfeeding or an early shift to bottle and complementary feeding, are associated with high rates of infection and malnutrition, resulting in an undesirable anthropometric profile in infants and children. Anthropometric measurements are useful criteria for assessing nutritional status. Anthropometry is one of the simplest and most cost-effective methods for the assessment of growth and development, especially in infants and children, and is concerned with the measurement of variation in physical dimensions. Certainly, the physical dimensions of the body are much influenced by nutrition, particularly in the rapidly growing period of infancy (Kamla, 2005) [6]. Anthropometry has become the conventional practical tool for evaluating the nutritional status of infants and children in Nigeria. The global initiative to promote exclusive breastfeeding is still a concern in Nigeria. Thus, this paper focused on the influence of maternal breastfeeding practices on the anthropometric status of their infants (0-12 months) in Arochukwu L.G.A., Abia State, Nigeria.

Study design

The study was a cross-sectional survey designed to establish the influence of maternal breastfeeding practices on the anthropometric status of their infants (0-12 months).

Study area

This research was carried out in Arochukwu Local Government Area of Abia State. The local government headquarters is in Arochukwu, a small town located at the southern end of Abia State, Nigeria. The Local Government Area comprises four communities and 88 autonomous villages making up Arochukwu L.G.A.
Sample population

The study comprised infants and their mothers attending health centers in Arochukwu Local Government Area of Abia State.

Sample size

The sample size was calculated using the formula n = Z²P(100 − P)/X², where Z is the standard normal deviate (1.96 for a 95% confidence interval), P is the percentage of nursing mothers practicing exclusive breastfeeding (taken to be 17%; UNICEF, 2015 [14]), X is the width of confidence or required precision (taken to be 5%), and n is the sample size. This gives n ≈ 217, which was rounded up to 250 to make up for dropouts and incorrectly filled questionnaires.

Sampling and sample technique

There are four communities in Arochukwu. A purposive sampling technique was used to select 250 children and their mothers from the four communities on their immunization days at their health care centers.

Data collection

Pretested, validated questionnaires were administered to consenting women by trained interviewers during postnatal clinic sessions in the selected health centers. The questionnaire contained questions on the socio-demographic characteristics of the women, personal data, knowledge and practice of mothers towards breastfeeding, factors that affect maternal breastfeeding, and infant anthropometry. The children were weighed using a special child weighing scale (Salter scale). The scale's balance indicator was checked to ensure that the scale was balanced. The child was undressed with the help of the mother/caregiver and laid safely in the centre of the weighing scale, and the reading was taken to the nearest 0.1 kg and recorded. The recumbent length was obtained using a crown-heel length board. Shoes, stockings and hats were removed and hair flattened. The child was laid flat on the back on the measuring board, with the head located at the end with the fixed headpiece and the eyes facing upwards. The child's trunk and legs were aligned, and both legs were extended by placing one hand on the knees to obtain full extension. The foot piece was brought firmly against the child's heels, pointing the toes upwards. The reading was taken to the nearest 0.1 cm and recorded. The head circumference was obtained using a non-stretch fibreglass tape. The child was made to sit on the mother's lap to hold the head still. The tape was placed horizontally around the head at a level just above the eyebrows (supraorbital ridge), the ears and the prominent bulge at the back of the head. The tape was tightened over the forehead but not so as to compress the soft tissues. The reading was taken to the nearest 0.1 cm and then recorded. Mid-upper arm circumference (MUAC) was obtained by sitting the child on the mother's lap and freeing the left arm from clothing so that it hung freely. A non-stretch fibreglass tape was placed around the left arm midway between the acromion process of the scapula and the olecranon process of the ulna. The reading was taken to the nearest 0.1 cm and recorded.
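For readers who wish to verify the arithmetic, a quick check of the sample-size formula given above (with the stated values of Z, P and X) can be written as:

```python
import math

# Quick check of n = Z^2 * P(100 - P) / X^2 with Z = 1.96 (95% confidence),
# P = 17 (% practicing exclusive breastfeeding), X = 5 (precision, in
# percentage points).
Z, P, X = 1.96, 17.0, 5.0
n = Z**2 * P * (100.0 - P) / X**2
print(math.ceil(n))   # 217, rounded up to 250 to allow for dropouts
```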
Data and statistical analysis

The classification of the children by the MUAC, weight-for-age, height-for-age and weight-for-height anthropometric indicators was carried out using z-scores based on the WHO (2006) growth standards. The variables were coded as shown in Table 1, and knowledge was scored and categorized using the grades shown in Table 2. Information gathered from the questionnaire was coded and entered into the computer using the Statistical Package for the Social Sciences (SPSS), version 20. Descriptive statistics such as frequencies and percentages were used to analyze the data on socio-economic status, knowledge, attitude and feeding practices. Pearson's correlation coefficient was used to determine the relationship between parental knowledge, attitude and feeding practices and the nutritional status of the children.

Results

Table 3 shows the socio-economic characteristics of the mothers. About 30% of the mothers were less than 20 years old, 38.4% were between 20-24 years, while 12.4% were between 25-30 years. Most of the women (64.2%) were married, while 34.8% were single. Almost half of the women (45.6%) had primary education as their highest educational qualification, while 34.4% had secondary education only. Only a few (9.8%) had tertiary education. Furthermore, 43.2% of the women were petty traders, while 24.8% and 13.6% were business women and unemployed, respectively. Nearly three-quarters of the women (73.2%) earned below ₦20,000 per month, 10.8% earned between ₦21,000-₦40,000 per month, while a few (8.8%) earned above ₦40,000 per month. Table 5 shows the mothers' practices on exclusive breastfeeding. About 33.2% breastfed their baby within 30 min after delivery, 25.2% within 6-10 hours after delivery, 14.8% within 1-5 hours, and 7.2% within 24 hours. About 33.6% of the mothers said infant formula was the first food they fed their baby after delivery, 30.4% gave glucose water, while 10% gave plain warm water. About 44.4% of the mothers said they had continued giving only breast milk to their babies, 17.6% said breast milk and plain warm water, 14.4% said breast milk and family food, and a few (9.6%) said breast milk and formula. However, 22% of the mothers said the reason for giving formula and other foods before 6 months was that their babies always cried after breastfeeding. About 38% of the women expressed their breast milk when they left home for some hours, 16.8% used family food, while 10.4% gave infant formula to their children when away from home for some hours. Furthermore, 28.4% of the women that used expressed breast milk stored it using a food warmer, while 2.8% used a freezer. The results further showed that about 42.4% of the women introduced other foods to their babies from 6 months and above, 27.2% at 3 months, and 15.6% at 1 month. Thirty-six percent of the women breastfed their children for 12-13 months. The majority of the women (71.2%) fed their children 3-4 times daily. Table 7 shows the anthropometric measurements of the children by sex. The prevalence of stunting was higher in males (58.9%) than in females (52%). The results also revealed that the prevalence of wasting was higher in males (16%) than in females (14.9%). Also, 40.4% of the children (49.3% of males and 36.6% of females) were underweight. Furthermore, 17.3% of males and 17.1% of females were underweight going by their BMI-for-age. Table 8 shows the anthropometric measurements of the infants by age. The majority of the infants (76%) between the ages of 0-3 months had above-normal weight for their length.
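The z-score coding described in the data-analysis section above follows the standard WHO cut-offs. A minimal sketch of such a coding is shown below; the cut-offs (< −3 SD and < −2 SD) are the conventional WHO thresholds, while the exact grade labels of the study's Table 1 are not reproduced here:

```python
# Minimal sketch of WHO growth-standard z-score coding for one indicator
# (height-for-age -> stunting, weight-for-age -> underweight, etc.).
def classify(z: float) -> str:
    """Classify a z-score using the conventional WHO cut-offs."""
    if z < -3:
        return "severe"
    if z < -2:
        return "moderate"
    if z <= 2:
        return "normal"
    return "above normal"

# Example: height-for-age z = -2.4 -> moderate stunting
print(classify(-2.4))
```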
About 43.5% of the infants aged 7-9 months were wasted. In their height-for-age z-scores, the majority (82.1%) of the infants between the ages of 0-3 months were stunted. About 34.2% of the infants aged 0-3 months and 88.2% of those aged 4-6 months were underweight. About 50.5% of infants aged 0-3 months had a normal BMI-for-age.

Discussion

Most of the mothers were less than 24 years old. It could be that the study area was made up of younger adults, and this is in line with a study by Mohidul et al. (2013) [8], where the majority of mothers were aged 20-30 years. This further reflects the peak of fertility and the age at which women commonly marry. The majority of the women had primary education as their highest education, followed by secondary education. This shows that the respondents mostly had some level of formal education. Educated mothers are mostly employed and are more likely to mixed-feed than to exclusively breastfeed in the first six months of their child's life. Most women in this study were into petty trading and business. Mohidul et al. (2013) [8] pointed out that maternal employment may not be a constraint to child care, because mothers modified their work patterns to attend to their young children's needs; but their employment status could influence the child's feeding. More than half of the women (73.2%) earned below ₦20,000 per month. This study is consistent with the findings of Ene-Obong et al. (2010) [5]. More than half of the mothers (53.6%) had a good knowledge of exclusive breastfeeding, while about two-fifths (41.2%) had poor knowledge of exclusive breastfeeding. This result did not support the 19.2% reported by Oche and Ahmed (2011) [7] in Plateau State. Poor knowledge of exclusive breastfeeding is a great barrier to the practice of exclusive breastfeeding. The perceived poor knowledge of respondents on exclusive breastfeeding could partly be attributed to the quality of information transferred to them; not all health care providers have enough knowledge about breastfeeding to inform or help women who are encountering difficulties. About 33.2% breastfed their baby within 30 min after delivery, while 14.8% breastfed within 1-5 hours after delivery. A similar finding was reported in a study of lactating mothers in Ile-Ife, Osun State, Nigeria, where more than 50% of the lactating mothers said the time of initiation of breast milk after delivery was within 30 min (Ojofeitimi, 2000) [10]. The importance of breastfeeding initiation within 30 minutes of delivery cannot be overemphasized; it has been shown to reduce neonatal morbidity and mortality. About 44.4% of the mothers said they had been giving only breast milk to their babies, indicating that 44.4% of the women practiced exclusive breastfeeding. About 38% of the women expressed their milk for their babies to be fed when they were not at home. Expressing breast milk is important when direct breastfeeding of the baby by the mother is not possible, either because the mother is away from the baby or in some health cases (Safaa, 2012) [11]. About 42.4% of the women introduced other foods to their babies from 6 months and above. This time of introduction of complementary food could be attributed to their knowledge, practice and attitude towards exclusive breastfeeding. This is in line with the UNICEF (2009) [13] recommendation that breast milk alone is the ideal nourishment for an infant for the first six months of life, because it provides all the nutrients, including vitamins and minerals, that an infant needs. One third of the women breastfed their children for 12-13 months. This result is in
contrast with Eman et al. (2016), whose study among working-class lactating women in Enugu showed that 50% of the respondents did not breastfeed up to one year. Optimal breastfeeding for 2 years and beyond provides comfort, and enhances child spacing and survival (UNICEF, 2009) [13]. The majority of the women fed their children 3-4 times daily. WHO (2004) [16] suggested that children between 6-8 months and 9-11 months be fed 2-3 times and 3-4 times daily, respectively. More than half of the infants (56.8%) were stunted. The prevalence of stunting reported in this study was higher than the 43% and 41% prevalences reported in Nigeria by NDHS (2013) and UNICEF (2009) [13], respectively. About 40.4% of the infants were underweight and 15.2% were wasted. This result was similar to NDHS (2013); this could be attributed to unhealthy feeding practices by their mothers, thus affecting their healthy growth and development. The finding of this study was comparable with the 14% wasting recorded among under-five children in Nigeria by UNICEF (2009) [13]. The prevalence of stunting and underweight in this study was high compared to the report by Ekerette and Olukemi (2016) [3]. The prevalence of stunting was higher in males than in females. In agreement with this study, Danjin and Dawud (2015) [2], in a study among children 0-12 months in Gombe, Nigeria, found that the prevalence of stunting was higher among male children compared to females. The prevalence of wasting was higher in males than in females; this was slightly lower than that reported by Ene-Obong et al. (2010) [5]. Adequate nutrition during infancy and early childhood is essential to ensure the growth, health, and development of children. The low prevalence of wasting among 0-3-month and 4-6-month-old children in this study concurred with the report by Danjin and Dawud (2015) [2]. The high prevalence of wasting among 7-9-month-old children in this study could be due to the type of complementary foods given to them, as breast milk alone cannot sustain them. The majority (82.1%) of the infants within the age of 0-3 months were stunted. The level of stunting at this age could be regarded as a serious public health problem; it is noteworthy that children who are deprived of nutrients for healthy growth are also deprived of nutrients for healthy brain development and healthy immune systems (Amsalu and Tigabu, 2008) [1]. Infants between 4-6 months were the most underweight; the observed underweight status of the infants at this age could be attributed to the early introduction of complementary food by their caregivers.

Conclusion

The high levels of under-nutrition in this study point to the need for nutrition intervention. Exclusive breastfeeding and timely introduction of appropriate complementary feeding are key factors in child growth and development. The assessment of nutritional status showed that many children were stunted, wasted, or underweight, and this may be a result of poor feeding, inadequate complementary feeding practices and the poor economic status of their parents. There is a need for more education, focused on women of reproductive age, on the importance of exclusive breastfeeding followed by adequate complementary feeding.
Table 4 shows the knowledge on exclusive breastfeeding by the mothers: more than half of the mothers (53.6%) had a good knowledge of exclusive breastfeeding, about two-fifths (41.2%) had poor knowledge, while a few (5.2%) had an excellent knowledge of exclusive breastfeeding.

Table 1: Classification, in z-scores, of height-for-age, weight-for-age, weight-for-height and BMI-for-age
Table 2: Knowledge grades
Table 3: Personal and socio-economic characteristics of the mothers
Table 4: Mothers' knowledge on exclusive breastfeeding
Table 5: Mothers' practices on breastfeeding and feeding
Table 6: Anthropometric measurements of infants
Table 7: Anthropometric measurements of the infants by sex
Table 8: Anthropometric measurements of the infants by age
2023-07-11T16:27:08.070Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "05a887845517ec071e3f8c2b4215e58d3d72a2ef", "oa_license": null, "oa_url": "https://www.hortijournal.com/article/view/60/3-1-12", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7bc047b99e87b475ca74b3cce322146bd83928da", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
225620392
pes2o/s2orc
v3-fos-license
The Application of Computer Modeling Technology in Art Design The art design style of computer modeling has been a focus of China's art design in recent years. Since the start of the 21st century, the concept of art design has developed and been continually reinvented, and a series of designs such as Java and dynamic titles became popular. Art design styles based on computer modeling approach reality from different points of view and tend toward symbolic, transitional styles, allowing people to understand the meaning of a work more easily, more comfortably and more willingly. This article analyzes the historical development of computer-modeling art design styles and, combined with sustainable graphic design theory, examines UI design in depth for the reader's reference.

Introduction

International graphic design has recognized the importance of art design concepts in this era of development, so the new generation of graphic designers increasingly bases graphic design on interactive experience and perception, with UI design as the foundation, making the transmission of information simpler and quicker; compared with stylistically cluttered advertising, such media have greater innovative and practical value. Quasi-physical (skeuomorphic) design has a more recognizable three-dimensional effect, but as a style that was popular only for a period of time it cannot make full use of design elements; the computer modeling approach can keep a design from becoming dated and can highlight the information itself, which is why this style spread around the world in a short period of time.

Swiss internationalist graphic design is the origin of computer modeling design

After the Second World War, the development of world graphic design stagnated for a period. In the middle of the 20th century, a brand new graphic design style was created in Switzerland, characterized by clarity and simplicity and conveying information in a concise and clear way [1]. It gradually became the most commonly used design style in the world, and was even named "Swiss internationalist graphic design". Figures 1 and 2 show the application of graphic design in resource development. Zurich and Basel were the two important cities in which computer modeling design originated and developed; a series of graphic design masters emerged there in the 20th century, and the launch of the New Graphic Design magazine in Switzerland in 1959 marked the gradual maturation of the Swiss international graphic design style, which has extremely important practical significance for the historical development of computer modeling design. The internationalist style achieves the unification of visual effects by constructing an orderly grid structure and a standardized layout, while in terms of visual expression, asymmetric graphic organization of design elements can be adopted for visual impact and innovative computer modeling design [2]. Because the layout without decorative lines is set in left-aligned form, it presents very standard and regular characteristics in the plane effect [3].

Emergence and development of computer modeling in art design

Websites in the late 20th century were cluttered with moving pictures. Since the start of the 21st century, the concept of art design has developed and been continually reinvented, and a series of designs such as Java and dynamic film titles became popular [4].
Art design styles created on the basis of computer modeling, accompanied by different perspectives on reality, tend to transform symbols and styles, making it easier, more comfortable and more appealing to understand the meaning conveyed by the works, which is very important for interactive UI design in the new era. At that time, it took a long time to load cluttered information and components. Microsoft noticed this phenomenon and brought the concept of art design into the research on its new operating system; the production of the "Segoe" font broke the deadlock to a large extent, as it presents information in a simple and clear way, with computer modeling highlighting the key information. This aspect is very similar to the UI design concept of the traditional Chinese art style, which leans toward craftsmanship whose generating process people are not familiar with: when people admire such works they do not need to "read" them to feel their beauty and comfort, and on this invisible basis they can seek spiritual resonance with the author's ideas as a whole, reaching higher levels of computer-modeling-based interactive UI design.

UI interactive application of traditional Chinese art style

In the development of traditional art in China, calligraphy, traditional Chinese painting, seal cutting and other arts are largely based on freehand brushwork, grasp elements with the help of the force of nature, and pay close attention to the effect of natural integration. Through the integration of personal feelings into the works, both the creator and the viewer can read the mood from the outline and handwriting of calligraphy [5]; observe the creator's vision and momentum through the color and angle transformations of traditional Chinese painting; and read the originality and meaning of the creator through the strength and design of seal cutting, as shown in Figure 3. Seal cutting has a history of nearly four thousand years in China. As an ancient art, seal cutting skillfully integrates calligraphy and knife skills, and uses a seal to create freehand brushwork "within a square inch" by computer modeling. The continuous development of seal cutting technology has gradually evolved into a variety of schools. Seal cutting artists work with the defects of the materials themselves when creating, and design the blank space interactively. The application of the Chinese traditional art style in UI interaction is also reflected in calligraphy creation. Works that incorporate the calligrapher's personal feelings, mood and experience are also an interactive appreciation and analysis of a full artistic style. With the continuous development and improvement of artistic expression in recent years, UI design in the Chinese traditional art style will lean more toward craftsmanship whose generating process people are not familiar with: when people admire such works they do not even need to "deliberate" to feel their beauty and comfort, better achieving spiritual resonance with the author's ideas as a whole and reaching higher levels of computer-modeling-based interactive UI design [6].

Evolution of computer modeling style and exploration of interactive UI design

The art design style of computer modeling has something in common with the development of traditional art in China, and at the same time it also has the innovative interactive features of UI design in terms of style and form of expression.
Among them, the characteristics and changes of the computer modeling style can be subtly seen in the painting of three periods:

- The Mona Lisa, as a masterpiece of Leonardo da Vinci, is widely talked about for its mysterious smile. In the flat composition of the Mona Lisa, people can not only focus on its artistic value, but also think about the story behind the smile, which is highlighted by the computer modeling design.
- Sunrise, a masterpiece of Monet's impressionism, abandons the realistic approach in its composition and transforms light and shadow on the basis of computer modeling to depict a more representative world and plane for people.
- The Scream, as the representative work of Edvard Munch, highlights the characteristics of expressionism, and the author also believes computer modeling is an important approach in this painting: the twisted face against the beautiful sunset over the river conveys an impression of anxiety and pain, and what leaves a deep impression on the viewer is the pulling-away effect of the lines produced by the computer-modeling treatment, on the one hand astonishing and on the other hand thought-provoking.

Art design styles created on the basis of computer modeling, accompanied by different perspectives on reality, tend to transform symbols and styles, making it easier, more comfortable and more appealing to understand the meaning conveyed by the works, which is very important for interactive UI design in the new era. And because the means of artistic expression are varied, interactive UI design based on computer-modeling art design is a form that can fully express the creator's thoughts: it can make graphic design more vivid, and at the same time make works more meaningful and spiritual, better fitting the creator's computer-modeling-based, distinctive, interactive design concept, so as to produce designs that are more timely, more characteristic and more stylistically diverse.

Conclusion

To sum up, the internationalist style achieves the unification of visual effects by constructing an orderly grid structure and a standardized layout; in terms of visual expression, asymmetric graphic organization of design elements can be adopted for visual impact and innovative computer modeling design. The computer modeling style of art and design has something in common with the development of China's traditional art, and at the same time it has the innovative interactive features of UI design in style and form of expression. The step-by-step movement of graphic works toward "reality" is also the embodiment of fitting reality; from realism to the transfer of light and shadow, and from Chinese calligraphy to Western painting, all are interactive creations based on computer modeling design, which is of great practical significance for contemporary interactive UI design.
2020-07-16T09:08:06.763Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "b4c9b2ca2494259fc3562210c587805189d69874", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1578/1/012010", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "138e20be80e189c32b888252e2f359895766d2c7", "s2fieldsofstudy": [ "Art", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
4885564
pes2o/s2orc
v3-fos-license
Stochastic Observation Optimization on the Basis of the Generalized Probabilistic Criteria Until now, the synthesis problem of optimum control of the observation process has been considered and solved satisfactorily mainly for linear stochastic objects and observers, by optimization of a quadratic criterion of quality expressed, as a rule, through the a posteriori dispersion matrix [1-4]. At the same time, the statement of the synthesis problem for optimum observation control in a more general case assumes, first, a nonlinear character of the object and observer and, second, the application of non-quadratic criteria of quality, which, in principle, can provide potentially higher estimation accuracy [3-6].

Introduction

Until now, the synthesis problem of optimum control of the observation process has been considered and solved satisfactorily mainly for linear stochastic objects and observers, by optimization of a quadratic criterion of quality expressed, as a rule, through the a posteriori dispersion matrix [1-4]. At the same time, the statement of the synthesis problem for optimum observation control in a more general case assumes, first, a nonlinear character of the object and observer and, second, the application of non-quadratic criteria of quality, which, in principle, can provide potentially higher estimation accuracy [3-6]. Since the solution of this problem in such a statement, generalizing the existing approaches, is of obvious interest, we formulate it more precisely as follows.

Description of the task

Let the Markovian vector process λ_t, described generally by the nonlinear stochastic differential equation in the symmetrized form

dλ_t/dt = f(λ_t, t) + f_0(λ_t, t) n_t,

where f, f_0 are known N-dimensional vector and N × M-dimensional matrix nonlinear functions and n_t is a normalized white Gaussian M-dimensional vector noise, be observed by means of a nonlinear vector observer of the form

Z_t = h(λ_t, t) + W_t,

where Z_t is the L-dimensional vector of the output signals of the meter; h(λ, t) is a known nonlinear L-dimensional vector function of observation; and W_t is a white Gaussian L-dimensional vector measurement noise with zero mean and intensity matrix D_W. The a posteriori probability density (APD) of the process λ_t will be denoted ρ(λ, t). Since the main problem of the a posteriori analysis of the observed process λ_t is to obtain the most reliable information about it, it would be natural to formulate the synthesis problem of the optimum observer as the determination of the form of the observation function that optimizes a functional of the APD on Λ*, where Λ* is some bounded set of the state parameters λ_t. In the final shaping of the structure of the optimality criterion J, it is also necessary to take into account the limited possibilities for the practical realization of the observation function h(λ, t), which results, in its turn, in an additional restriction on the choice of the functional dependence h(λ, t). Formalizing this restriction, for example, as the requirement to minimize the integrated deviation of the function H from a given form H_0 on the set Λ* during the time interval T, allows the minimized criterion J to be written down analytically (Eq. (2)). Thus, in view of the above reasoning, the final statement of the synthesis problem of the optimum observer consists in defining the function h(λ, t) that gives the minimum to functional (2).
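As an illustration (not from the paper) of the object/observer pair just described, the following Euler-Maruyama sketch simulates a scalar instance with assumed nonlinearities f, f_0, h and assumed noise intensities q and D_W:

```python
import numpy as np

# Euler-Maruyama simulation of a scalar instance of
#   d(lambda)/dt = f(lambda, t) + f0(lambda, t) * n_t,   Z_t = h(lambda, t) + W_t.
# All nonlinearities and intensities below are illustrative assumptions.
rng = np.random.default_rng(0)

f = lambda lam, t: -lam + np.sin(lam)        # drift (assumed)
f0 = lambda lam, t: 0.5                      # diffusion gain (assumed)
h = lambda lam, t: np.tanh(lam)              # nonlinear observation function
q, Dw, dt, steps = 1.0, 0.04, 1e-3, 5000

lam = 1.0
Z = np.empty(steps)
for k in range(steps):
    t = k * dt
    lam += f(lam, t) * dt + f0(lam, t) * np.sqrt(q * dt) * rng.standard_normal()
    # Discrete-time sample of Z_t = h(lambda_t, t) + W_t; a white noise of
    # intensity Dw has sampled standard deviation sqrt(Dw / dt).
    Z[k] = h(lam, t) + np.sqrt(Dw / dt) * rng.standard_normal()

print(lam, Z[-5:])
```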
Synthesis of the optimal observation control

The APD function entering the criterion is described explicitly by the integro-differential Stratonovich equation, whose right-hand side depends on h(λ, t). The analysis of experience with the instrument realization of meters shows that their synthesis consists, in essence, in defining the parameters of some functional series approximating the output characteristic of the designed device with a given degree of accuracy. As such a series one uses, as a rule, a finite expansion of the nonlinear components of the vector h(λ, t) in some given system of multidimensional functions: power, orthogonal, etc. Having designated the vector of multidimensional basis functions accordingly, h(λ, t) is represented as a finite expansion with a coefficient vector h̄ (Eq. (3)). For the subsequent analytical synthesis of the optimum vector function h(λ, t) in the form of (3), we rewrite the equation of the APD ρ(λ, t) in the appropriate form (4). With these constructions carried out, the problem of searching for the optimum vector h(λ, t) is reduced to the synthesis of the optimum, in the sense of (2), control h̄ of the process with distributed parameters described by the Stratonovich equation (in view of representing the vector H_0(λ, t) in a form similar to (3)). The optimum control of the process ρ(λ, t) will be sought in the class of bounded piecewise-continuous functions with values from the open region H*. For its construction we use the method of dynamic programming, according to which the problem is reduced to the minimization of the known functional [1], under the final condition V(t_k) = 0, with respect to the optimum functional V = V(ρ, t), parametrically dependent on t ∈ [t_0, t_k] and determined on the set of functions satisfying (4). For processes described by linear equations in partial derivatives, and criteria of the above-stated form, the functional V is sought in the form of an integral quadratic form [1]; in this case, minimization yields the optimum vector h̄_opt. For V(ρ, t) we then have equation (6), which is coupled with the equation of the APD; after substitution of the expression for h̄_opt into the latter, it takes the form (7).

Suboptimal control of observations

The solution of the obtained equations (6), (7) exhausts the stated problem completely, allowing the required optimum vector function h of the form (3) to be generated. On the other hand, the solution of system (6), (7) is a two-point boundary-value problem for a system of integro-differential equations in partial derivatives, for which, as is known, no general methods of exact analytical solution currently exist.
Without considering the numerous approximate methods for the solution of this problem, which trade accuracy against the volume of computing expenses, we use as one of the solution methods the method based on the expansion of the functions V and ρ in series over some system of orthonormal functions of the vector argument λ. From the point of view of practical realization, the integration of system (8) under the boundary-value conditions appears simpler than the integration of (6), (7), but from the point of view of organizing the estimation process in real time it is still hindered: first, the volume of the necessary time and computing expenses is great; second, the possibility of adjusting the coefficient vector h̄ in real time, as the measurement signal Z arrives, is excluded, and a prior simulation of realizations of Z appears to be necessary (in this case, in the course of the instrument realization, as a rule, one fails to maintain the precisely given values of h̄ anyway). Thus, the use of approximate methods for the solution of problem (8) is quite justified in this case; as one of them we consider the method of invariant imbedding [3], used above, which provides the required approximate solution in real time. As the application of this method assumes specifying all the components of the required approximately estimated vector in differential form, then for realizing the possibility of synthesizing the vector h̄ through this method in real time we introduce a dummy variable v, allowing the expression for h̄_opt to be taken into account from here on as a differential equation forming a unified system with equations (8). The application of the method of invariant imbedding results in this case in the following system of equations: By virtue of the fact that the matrix D in the method of invariant imbedding plays the role of a weight matrix for the deviation of the vector of the approximate solution from the optimum one, in this case the corresponding components of D characterize the degree of deviation of the expansion variables from the coefficients of the expansion of the true APD (the components of D_0 are the deviations of the parameters at the initial moment). The essential advantage of the approach considered, despite the formation of an approximate solution, is the feasibility of synthesizing the optimum observation function in real time, i.e. in the course of the arrival of the measuring information.

Example

For the illustration of the feasibility of the practical use of the suggested method, the numerical simulation of the process of forming the vector
2017-09-17T18:56:34.592Z
2012-11-28T00:00:00.000
{ "year": 2012, "sha1": "92c302d1c172f76c2341b836385cd79f29e9e1a9", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/41182", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "55f9ca2943907152799036696e5f2376208781e1", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
236315310
pes2o/s2orc
v3-fos-license
Automated Screening and Filtering Scripts for GC×GC-TOFMS Metabolomics Data: Comprehensive two-dimensional gas chromatography mass spectrometry (GC×GC-MS) is a powerful tool for the analysis of complex samples such as those encountered in metabolomics. When unit-mass-resolution mass spectrometers are used, many detected compounds have spectra that do not match well with libraries. This could be due to the compound not being in the library, or the compound having a weak/nonexistent molecular ion cluster. While high-speed, high-resolution mass spectrometers, or ion sources with softer ionization than 70 eV electron impact (EI), may help with some of this, many GC×GC systems presently in use employ low-resolution mass spectrometers and 70 eV EI ionization. Scripting tools that apply filters to GC×GC-TOFMS data, based on logical operations applied to spectral and/or retention data, have been used previously for environmental and petroleum samples. This approach rapidly filters GC×GC-TOFMS peak tables (or raw data) and is available in software from multiple vendors. In this work, we present a series of scripts that have been developed to rapidly classify major groups of compounds that are of relevance to metabolomics studies, including fatty acid methyl esters, free fatty acids, aldehydes, alcohols, ketones, amino acids, and carbohydrates.

Introduction

Most GC-based metabolomics applications combine GC with MS detection to help with the identification of unknown analytes. Metabolomics samples typically exhibit high complexity due to their diverse chemical content, which is present over wide concentration ranges. In non-target studies, accurate identification of metabolites at low concentrations can be complicated by coelutions and/or peak distortion due to closely/coeluting, highly abundant metabolites [1,2]. Low-concentration analytes can also be easily obscured due to noise in the spectrum, which can hinder the qualitative identification of peaks based on mass spectral library matching. Meanwhile, overloaded peaks from high-concentration species may lead to inaccurate identification arising from detector saturation and distortion of mass spectra [3]. As a platform, comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOFMS) is an excellent tool for non-target metabolomics. The higher and more effective use of peak capacity compared to one-dimensional GC methods results in improved signal-to-noise ratios due to increased signal (focusing/band compression at the modulator) and decreased noise (separation of analytes from primary column bleed and coeluting analytes). Consequently, spectra are cleaner, allowing improved compound identification. When compared to LC-MS methods, matrix effects are smaller in GC-MS, and the technique offers a broad dynamic range [4]. Additionally, GC×GC techniques provide chromatograms with an inherently ordered structure, which is useful for the identification of unknown compounds. Moreover, this technique is advantageous through the possibility of "seeing everything"; the TOFMS allows the capture of complete mass spectra at every point [5]. Due to the above benefits, this instrument is seeing increasingly frequent use for non-target metabolomics studies of biofluids (e.g., urine, blood, sweat), breath, plant extracts, etc.
[2,6-11]. However, the amount of data generated from such comprehensive techniques is massive and nearly impossible to handle manually [2,5,12,13]. GC(×GC)-MS systems often use electron impact ionization (EI), which generates highly reproducible fragmentation patterns in both the m/z values and the relative abundances of the corresponding ions. This facilitates the construction of databases of searchable mass spectral libraries. When a chromatogram is processed against an MS library database, the final peak table contains a list of tentatively identified analytes with their library match similarity factors. However, despite the use of such databases, a manual review of thousands of peaks in a sample can be a tedious task. Each entry in the peak table must be verified by comparing the retention index and the MS library match for a higher assurance of compound identification. Unfortunately, in many studies with unit-mass-resolution mass spectrometers, many (sometimes most) detected compounds have spectra that do not match well with libraries. While the advancements made in GC×GC-MS systems, with high-speed, high-resolution accurate-mass spectrometers or ion sources with softer ionization than 70 eV electron impact, may resolve some of these challenges, there are many GC×GC systems presently in use that rely on low-resolution mass spectrometers. Through the process of manually examining the peak table, compounds with low spectral match quality can be evaluated along with library retention indices to obtain the final list of provisional identifications. The issue is that when analytes are searched against a library, it is possible for multiple analytes to receive the same library hit in the case of structurally similar compounds. It is also common that a detected analyte is not registered in the mass spectral library database. This is especially true for trimethylsilyl (TMS) derivatives of compounds, generated with a gold-standard derivatization method for metabolomics samples [14]. Consequently, peak tables suffer from incorrect/ambiguous name assignments [15], and these tentative names must be verified through a manual process by knowledgeable personnel capable of interpreting the mass spectral and elution data. With a complex sample, where the list of analytes can reach several thousands of peaks, this manual process is a significant burden. The complexity of data analysis, rather than the usual culprits of sample preparation or instrumental time, serves as the major bottleneck in GC×GC-TOFMS analysis. In order to speed up and simplify data analysis, script-based filtering of peaks is a promising tool. Scripting involves programming a series of logic rules, based on the mass spectrometric and/or retention properties of target compounds, to determine whether a compound belongs to a specific class [16-18]. The extensive and reproducible fragmentation patterns from EI are advantageous for creating mass spectral filtering scripts. The scripts work as a data reduction filter by enabling the classification of chromatographic peaks based on distinguishable features in the mass spectral information. Scripting tools that apply filters to GC×GC-TOFMS data were initially used for environmental and petroleum samples. Numerous scripts have been published to aid with, for example, the identification of halogenated species [17,19-22]. It was evident that the scripting tool greatly assists with an automated and rapid classification of the compounds in GC×GC chromatograms [16,17,23-26].
The speed and convenience of data analysis afforded by scripting contributed to the more widespread use of GC×GC in the environmental and petroleum fields. When developing scripts, finding the molecular ion is beneficial, as the subsequent expected neutral losses can be deduced from the molecular ion peak. The primary reason why scripts could be developed and used widely for environmental studies is the convenience of locating the molecular ion for the major compound classes of interest, such as halogenated species and aromatic compounds. Investigation of the molecular ion was performed as the fundamental step in many of the previously reported scripts for environmental samples [16,17,27]. Writing scripts becomes more challenging for compounds that generate a weak (or no) molecular ion, as a molecular ion usually forms the basis for scripting rules. TMS derivatives generally produce weak or undetectable molecular ion peaks due to the fast elimination of the substituent radical from the silicon of the molecular ion [28,29]. In addition, for TMS-derivatized compounds, the trimethylsilyl moiety (m/z 73) is, if not the base peak, a major ion common to all TMS derivatives. This manuscript presents a suite of scripts developed for GC×GC-TOFMS metabolomics data with the aim of rapid screening of complex biological samples, which typically contain thousands of compounds comprising diverse compound classes. In this work, the scripts were developed using the scripting feature in ChromaTOF® (v.4.72; LECO), one of the most widely used commercial software packages in the GC×GC-TOFMS community. However, the scripts presented herein should be equally applicable to data from other GC×GC-MS systems, possibly with minor adjustments to the abundance thresholds in the logical decision trees. The greatest advantage of the scripts presented herein is their reliance solely on mass spectral information (i.e., retention information is not considered). This makes the scripts versatile and applicable to any GC×GC-TOFMS data (likely well-resolved GC-TOFMS data also), regardless of the conditions used in the analytical run. The scripts were applied to standard mixtures of four different major classes of metabolites (amino acids, fatty acids, fatty acid methyl esters, and carbohydrates) at different concentrations. After validating the performance of the classifying scripts with standards at low and high concentrations, the automated filtering scripts were applied to various derivatized and non-derivatized biosamples to evaluate their performance. To the best of our knowledge, this represents the first collection of automated filtering scripts for handling GC×GC-TOFMS data in metabolomics applications.

Derivatization Materials

HPLC-grade methanol, HPLC-grade toluene, and 99.9% pyridine were purchased from Millipore-Sigma Canada. Toluene was dried over anhydrous sodium sulfate (Millipore-Sigma Canada). Methoxyamine hydrochloride (Millipore-Sigma Canada) solution was prepared in pyridine at a concentration of 20 mg/mL. Ampoules of N-methyl-N-(trimethylsilyl)trifluoroacetamide + 1% trimethylchlorosilane (MSTFA + 1% TMCS) were purchased from Fisher Scientific Canada and opened immediately prior to use. Safe-Lock amber centrifuge tubes were purchased from Eppendorf Canada Ltd., while 2-mL glass GC vials, GC vials with integral 300 µL inserts, and GC vial caps (PTFE-faced silicone) were purchased from Chromatographic Specialities Inc. (Canada).
Standard Mixtures
To test the performance of the developed scripts at various concentrations (from the limit-of-detection level up to column-overloading levels), an amino acid standard mixture (AAS18-10 mL analytical standard, Millipore-Sigma Canada), a fatty acid standard mixture (Nu-Check, MN, USA), a fatty acid methyl ester standard mixture (SUPELCO 37 Component FAME Mix, Millipore-Sigma Canada), and a carbohydrate standard mixture (Carbohydrates kit, Millipore-Sigma Canada) were combined at 18 different concentrations. The compounds of the standard mixture used in the experiment are listed in Supplementary Materials Table S1. The details of how the mixtures were prepared are also included in the Supplementary Materials. A total of 108 compounds are in the mixture of standards, including 17 amino acids, 44 fatty acids, 37 fatty acid methyl esters, and 10 carbohydrates.

Sample Preparation
The performance of the scripts was evaluated using previously acquired data from a variety of sample types, including urine, plasma, algae, feces, and sweat [30,31]. Urine, plasma, algae samples, and standard mixtures were prepared by a typical two-step derivatization process of methoximation, followed by subsequent trimethylsilylation [32]. The details of sample preparation are included in the Supplementary Materials. In brief, the sample was extracted with an organic solvent and dried under nitrogen. To the dried residue, 50 µL of 20 mg/mL methoxyamine hydrochloride in pyridine were added for methoximation and incubated at 60 °C for 2 h. Subsequently, 100 µL of MSTFA were added and incubated again at 60 °C for 1 h.

SPME (Solid-Phase Microextraction)
The volatiles from feces and sweat samples were extracted using a three-phase SPME fibre (CAR/DVB/PDMS) with sampling from the headspace [30,33]. Other details are in the publications presenting the data sets.

GC×GC-TOFMS Analysis
All GC×GC-TOFMS analyses were performed on a LECO Pegasus 4D system (Leco Instruments, St. Joseph, MI, USA), with an Agilent Technologies 7890 gas chromatograph (Palo Alto, CA, USA) equipped with a four-jet dual-stage liquid nitrogen cryogenic modulator. The samples were analyzed with different column combinations and different GC×GC and MS methods. The two GC×GC-TOFMS conditions used to evaluate the versatility of the scripts in Section 3.2 are summarized in Table 1.

Data Processing and Automated Classification
All GC×GC-TOFMS data were processed using ChromaTOF® (v.4.72) software from LECO. The baseline offset was set to 0.9, and the expected peak widths throughout the entire chromatographic run were set to 10 s for the first dimension and 0.15 s for the second dimension. The data were processed with a peak finding threshold of S/N 30:1. Peak finding and deconvolution of mass spectra were performed automatically as an embedded function of ChromaTOF®. All chromatographic peaks were searched against the NIST MS Search v.2.3 (2017) and Wiley 08 libraries. The scripting option was enabled in ChromaTOF®, allowing user-written scripts to be applied over the entire chromatographic space. The scripts were written in the Microsoft VBScript language, a Visual Basic dialect.

Scripting-Based Classifications and Evaluation
The scripts used mass spectral information without any retention time information to locate the members of target compounds/classes. The scripts were written as a set of logical operations incorporating knowledge about the mass spectral fragmentation of the class of compounds of interest.
In general, the scripts presented in this study involve the following steps: the expected molecular ion of the family of compounds is calculated based on the molecular structure, probable neutral losses are subtracted from the calculated molecular ion, and then other prominent features (e.g., abundance of major fragments and low intensities for specific regions in the mass spectrum) are evaluated. For metabolites, which are mostly non-halogenated species, isotopic ratios are not as useful as they are for the halogenated species in environmental studies. For a homologous series of metabolites, the expected molecular ion was calculated using the number of carbons in the alkyl chain. For example, for the class of normal saturated fatty acid methyl esters, the theoretical molecular ion was determined with Equation (1). From the calculated molecular ion, the subsequent fragment losses, such as [M-31]+, representing a loss of a methoxy group, as well as [M-43]+ and [M-29]+ arising from a complex rearrangement, were investigated. The ion at m/z 74 is the McLafferty rearrangement ion, which is the base peak for FAMEs; this confirms that the compound is indeed a methyl ester. For the case of TMS derivatives of saturated fatty acids, the molecular ion was also calculated, based on the number of carbons in the alkyl chain, with Equation (2). For TMS derivatives of fatty acids, the molecular ion is generally weak or absent due to its susceptibility to hydrolysis. Instead, [M-15]+, which represents the loss of a methyl group, is significantly abundant. In addition, m/z 73 and 75 are common to all TMS derivatives and usually are considerably abundant.

In theory, the molecular ion should be the highest m/z aside from its isotopic cluster. To address the limitation of compounds with weak/nonexistent molecular ion peaks, the strategy of calculating the molecular ion, instead of searching for the appearance of the molecular ion in the mass spectra, was used. With this approach, spectra that have a considerable number of high-mass ions above the expected molecular ion with significant intensities were programmed to be excluded from the classification. With logical operations, the intensities (ion counts) in the mass channel region above [M+2]+ up to the end mass of the mass spectrum were checked to determine whether they fell within the set tolerance for noise (i.e., 1% relative abundance of the base peak). This was done to reduce the chance of larger molecules being falsely filtered due to common fragment ions or random chance. The scripts that were developed for use in this work can be found on GitHub at https://github.com/seolinnam/Scripts_Metabolomics (accessed on 14 June 2021). As additional metabolomics-related scripts are developed, they will be added to this repository.

To evaluate the performance of the scripts, standard mixtures of four different classes of compounds (amino acids, fatty acids, fatty acid methyl esters, and carbohydrates) were prepared at various concentrations. Since the scripts rely entirely on mass spectral information, the spectral quality is crucial for the scripts to work reliably.
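The bodies of Equations (1) and (2) referenced above did not survive extraction. A plausible reconstruction, assuming nominal masses and n taken as the total number of carbons in the fatty acid (the published equations may count the alkyl-chain carbons differently), is M = 14n + 46 for saturated FAMEs (Equation (1); e.g., methyl palmitate, n = 16, M = 270) and M = 14n + 104 for TMS esters of saturated fatty acids (Equation (2); e.g., palmitic acid TMS, n = 16, M = 328). The decision tree described above can then be mirrored in a standalone sketch; the published scripts are VBScript run inside ChromaTOF®, so the Python below only illustrates the logic, and all function names and thresholds are hypothetical rather than part of any vendor API:

```python
NOISE_REL = 0.01  # noise tolerance: 1% relative abundance of the base peak

def rel(spectrum, mz):
    """Relative abundance (0..1) of an m/z channel; spectrum: {m/z: counts}."""
    return spectrum.get(mz, 0) / max(spectrum.values())

def clean_above(spectrum, m):
    """True if every channel above [M+2]+ stays within the noise tolerance,
    i.e., no significant ions heavier than the expected molecular ion."""
    return all(rel(spectrum, mz) <= NOISE_REL for mz in spectrum if mz > m + 2)

def is_saturated_fame(spectrum, n):
    """Saturated FAME with n total carbons (reconstructed Equation (1))."""
    m = 14 * n + 46
    base_is_74 = max(spectrum, key=spectrum.get) == 74  # McLafferty ion
    losses = all(rel(spectrum, m - d) > NOISE_REL for d in (31, 43, 29))
    return base_is_74 and losses and clean_above(spectrum, m)

def is_tms_saturated_fa(spectrum, n):
    """TMS ester of a saturated fatty acid (reconstructed Equation (2));
    the molecular ion itself may be weak or absent, so [M-15]+ is used."""
    m = 14 * n + 104
    return (rel(spectrum, m - 15) > NOISE_REL
            and rel(spectrum, 73) > NOISE_REL   # trimethylsilyl moiety
            and rel(spectrum, 75) > NOISE_REL
            and clean_above(spectrum, m))

# A caller would loop n over the homologous series, e.g.
# any(is_saturated_fame(spec, n) for n in range(4, 31))
```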
Various concentrations of standard mixtures were prepared to test both extremes: (1) extremely high concentrations, where overloaded peaks saturate the detector and can distort ion ratios, and (2) very low concentrations, where the signals may be close to the noise level, boosting the noise in the mass spectra and leading to inaccurate ion ratios and fragmentation patterns. Eighteen mixtures of the standards at different concentrations were prepared by mixing the four different classes of standard mixtures at various concentrations. These standard mixtures were derivatized following the two-step methoximation and trimethylsilylation derivatization procedure.

Results and Discussion
Since GC×GC-TOFMS data provide both chromatographic and mass spectral information, scripts can be written using either or a combination of retention and spectral information. While mass spectral information is independent of the GC×GC-TOFMS method used, retention information depends on the column combination and the GC×GC method (temperature programming, flow rate) used for the analysis. Some published scripts involve retention information in the search algorithms to enhance the accuracy of the scripts for compound classes that are challenging to distinguish with mass spectral information alone. The scripts presented herein were written using only mass spectral information. This provides a significant advantage because these scripts are independent of separation parameters and can be applied to any GC or GC×GC-TOFMS chromatograms, regardless of the column combinations and GC and MS conditions used.

Evaluation of Scripts
The same data processing method with the in-house written scripts was applied to all 18 chromatograms using ChromaTOF®. A peak table for each chromatogram was generated automatically by the software after the data processing was finished. The family group for each compound classified by the scripts was displayed in the classification column of the peak table. The peak tables were sorted to prioritize classified compounds (Tables 2 and 3). The detailed results of how many compounds in the standard mixtures at various concentrations were classified for each group are included in Supplementary Materials Table S2. Figure 1 shows the classified peaks using the "bubbles" feature, where the radii of the bubbles correspond to the relative areas of the represented peaks. Each class of compounds was assigned a different color. It is visually evident that the scripts struggled more to classify peaks at low concentrations (Figure 1A) than at high concentrations (Figure 1C). Figure 1C showed that while peaks may have been overloaded, the performance of the scripts was not significantly affected. At the low concentration, of the 108 compounds in the standard mixture, only 14 compounds (8 saturated fatty acids, 2 monoenoic fatty acids, 1 dienoic fatty acid, and 3 carbohydrates) were classified using the scripts. In this work, the term limit of classification (LOC) is used to describe the lowest concentration at which the scripts could correctly classify the compound. The LOC varied widely for different classes, and even for different compounds within the same class, due to the differences in the complexity of the characteristic mass spectral features used in developing the scripts.
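The LOC defined here can be restated compactly; the sketch below assumes classification outcomes are available as (concentration, correctly classified) pairs from a dilution series, and the example values echo the arachidic acid case discussed next:

```python
def limit_of_classification(results):
    """Lowest concentration at which the script classified the compound
    correctly; `results` holds (concentration, classified_ok) pairs."""
    hits = sorted(c for c, ok in results if ok)
    return hits[0] if hits else None

# limit_of_classification([(21.3, False), (85.1, True), (2130.0, True)])
# -> 85.1 (pg on-column)
```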
Depending on how unique the fragmentation of the target compound is, scripting can filter out the compounds of interest more or less effectively using the distinct mass spectral features. As an example, the LOC for the TMS derivative of arachidic acid was determined to be 85.1 pg on-column. The compound was detected with a signal-to-noise ratio of 387. The concentration of 85.1 pg on-column was the first occurrence of the compound being classified correctly as a TMS derivative of a saturated fatty acid. The mass spectral match score for this compound at the lowest concentration (21.3 pg on-column, S/N 64.22) was 464 for similarity and 732 for reverse (Figure 2B), which was significantly lower than 857 for similarity and 918 for reverse in a higher concentration (2.13 ng on-column, S/N 15089) standard (Figure 2A).

Low concentrations resulted in low-quality spectra with higher noise, which hindered the ability of the classifying scripts and resulted in some false negatives. To alleviate this issue, the scripts were refined to eliminate any compound that has more than five prominent peaks (abundance greater than 1% of the base peak) beyond the [M+2]+ peak. The threshold for tolerating noise in the masses above [M+2]+ was calculated by taking the average of all the signals of masses above [M+2]+, with four standard deviations above the average set as the threshold to discriminate real signals from noise. This small alteration slightly improved the results of the classifying scripts at the lower concentrations; however, it increased the possibility of false positives due to the increased flexibility for lower-intensity peaks at the higher end of the mass spectrum. Nonetheless, as a proof of concept, testing the scripts on the standard mixtures at various concentrations revealed their fairly robust performance as an automated, convenient, and rapid screening tool. Once the scripts are written, their accuracy and leniency can be tuned by the user by adding or removing specific features and adjusting the tolerance levels for ion ratios, based on need. Furthermore, the addition of retention time information may increase the accuracy of the scripts further. Since this would significantly sacrifice their versatility, we leave the inclusion of retention information to the individual laboratory, so that they can tune the scripts to their needs and their absolute retention times based on the columns and experimental conditions used in their laboratory.
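The refined rejection rule described above can be restated as follows; this is an illustrative sketch of the mean-plus-four-standard-deviations threshold and the five-prominent-peaks limit, not the published VBScript:

```python
from statistics import mean, stdev

def high_mass_noise_threshold(spectrum, m):
    """Threshold separating real signals from noise above [M+2]+:
    mean of all such channel intensities plus four standard deviations."""
    tail = [i for mz, i in spectrum.items() if mz > m + 2]
    return mean(tail) + 4 * stdev(tail) if len(tail) >= 2 else None

def reject_for_high_mass_peaks(spectrum, m, limit=5):
    """Eliminate a candidate with more than `limit` prominent peaks
    (>1% of the base peak) beyond the [M+2]+ channel."""
    base = max(spectrum.values())
    prominent = sum(1 for mz, i in spectrum.items()
                    if mz > m + 2 and i > 0.01 * base)
    return prominent > limit
```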
Versatility of Scripts
The greatest advantage of the proposed scripts is their flexibility; they can be applied to any GC×GC-TOFMS chromatograms, regardless of the conditions used. To validate this benefit, a standard mixture of 37 FAMEs was analyzed using two different GC×GC-TOFMS conditions. The chromatograms are shown in Figure 3. The chromatogram in Figure 3C is a FAME extract from algae, which was analyzed using the same conditions as for the chromatogram in Figure 3B. The chromatograms were processed using the same data processing method with the scripts for FAMEs. Table 4 shows the results for the two conditions and the algae extract.

Table 4. Number of FAMEs classified under each condition and in the algae extract.
Class         Condition 1   Condition 2   Algae extract
Total             34            21             49
Saturated         15            11             18
Monoenoic         10             6             11
Dienoic            2             2             11
Trienoic           4             2              6
Multienoic         3             0              3

With Condition 1, thirty-four FAMEs were identified (Figure 3A). However, methyl butyrate and methyl hexanoate (short-chain FAMEs) were missed in the classification because they eluted before the solvent delay time under the GC×GC-TOFMS conditions used to acquire the chromatogram. One isomer each of C18:2 and C20:2 was not detected, as the peaks were not well resolved in that region of the chromatogram from other nearby abundant peaks (Figure 3D).
Both cis- and trans-forms of methyl eicosenoate (C20:1) were detected and classified correctly as monoenoic FAMEs, although the SUPELCO certificate only mentions the presence of methyl cis-11-eicosenoate (Figure 3D). Twenty-one FAMEs, from C6 to C18, were correctly identified in Condition 2; the last compound that could be eluted under the given GC×GC-TOFMS conditions was the C18 FAME, owing to the temperature limitation and an insufficient hold time at the end of the run. The use of the PEG phase in the second dimension restricted the maximum temperature to 230 °C in 1D and 245 °C in 2D. Among the detected peaks in the range of C6 to C18 FAMEs, all twenty-one FAMEs present in the standard mixture were correctly classified into the corresponding groups. The scripts applied to the algae extract classified forty-nine FAMEs from 981 peaks in approximately 10 min of data processing time. All 49 peaks were identified correctly, despite the evidently overloaded peaks towards the end of the chromatogram (C16 and C18 FAMEs). The algae extract results showed that the scripting tool allowed rapid screening and provided a general understanding of the composition of the sample in a short time. The scripting tool enables quick visualization of the location of members of target classes of compounds, while simultaneously offering a rough visual estimation of the concentration of the compounds.

Filtering of Peak Table by Scripts
After the evaluation of the scripts, they were applied to four different real-world samples that were prepared with two major sample preparation methods for metabolomics studies: SPME and TMS derivatization. Sweat and fecal samples were prepared with SPME, a method for volatile analysis without derivatization, whereas plasma and urine were prepared with a two-step methoximation/trimethylsilylation derivatization. The four different samples were analyzed with different GC×GC-TOFMS conditions, each with a different column configuration. All four acquired chromatograms (Figure 4) were processed with the same scripts, without any special treatment of the data, such as artifact removal (column bleed), in order to fully assess the power of the scripts. The classes of metabolites targeted by the scripts were aldehydes, alcohols, ketones, free fatty acids, fatty acid methyl esters, fatty acid ethyl esters, and isopropyl esters for non-derivatized compounds, and trimethylsilyl esters of amino acids, fatty acids, sugars, other organic acids, and sterols for the TMS derivatives. The raw chromatograms of each sample contained thousands of peaks, which makes reviewing the data and extracting useful information and interpretation a challenge. After the scripting filters were applied, the peak tables were reduced from the original thousands of detected peaks to a few dozen classified compounds, which makes manual review of the data more realistic and convenient. The classified peaks for each sample were reviewed manually to verify the accuracy of the scripts. In the process of the manual review, true positives (TPs) and false positives (FPs) for classification were determined. TPs indicate the compounds that were correctly classified to the corresponding class, whereas FPs represent the compounds that were incorrectly assigned to the class. The number of TPs and FPs for each class of compound was counted for the evaluation of the scripts. To confirm the identity of a peak, the entire mass spectrum of each classified analyte was examined against a library.
The ordered structure in GC×GC also helped the compound identification by diagnosing the relative position of the compound in the chromatographic space, especially for homologous series of compounds. Table 5 shows the results of the scripts applied to the four different samples. The number of compounds that were classified correctly into the corresponding class was counted and recorded as TPs, with the number of FPs recorded in parentheses. Overall, the scripts displayed high accuracy, given that the samples were analyzed with different GC×GC-TOFMS methods. It is noteworthy that for the samples that were extracted with SPME, no compounds were classified as TMS derivatives. On the other hand, for derivatized samples, no compounds were classified as alcohols or free fatty acids, which would have been trimethylsilylated. Although it is not practical to examine every single peak in a sample that contains thousands of peaks to assess the occurrence of true negatives and false negatives, the fact that no compound was classified as a TMS derivative in the SPME samples, and vice versa for the TMS-derivatized samples, provides a strong indication that the scripts are reliable. Even with the powerful separation efficiency of GC×GC, coelution is inevitable for complex biological samples. As an example, in the plasma sample, methionine-2TMS and aspartic acid-3TMS coeluted almost perfectly in both the first and second dimensions (Figure 5A). As both are TMS derivatives (i.e., m/z 73 is a common ion in the derivatized products), it would have been difficult to distinguish them as two different peaks even with the EIC of m/z 73, and they would have been easily missed without careful examination of the data. However, with the scripting tool, they were identified as two distinct compounds, despite their vast difference in intensities, where methionine-2TMS could be obscured by aspartic acid-3TMS (Figure 5B).

Applying Cached Scripts
There are two ways that scripts can be applied to GC×GC data in ChromaTOF®. The scripts described so far were used to classify chromatographic peaks that match specific spectral criteria in a peak table. Another manner in which scripts can be applied in ChromaTOF® is with so-called cached scripts. The script function with a cached script returns a numeric value, which is cached into a calculated "ion trace" during data processing. After the data are processed, the cached ion traces are plotted, and the responses of only the target analytes that match the particular spectral criteria are shown. As an example, cached scripts for TMS esters of saturated fatty acids were applied to a derivatized standard mixture. The standard mixture contained over 1000 peaks. Figure 6 presents the comparison between the TIC and the cached script result. An EIC using mass channel m/z 73 would not be sufficiently selective in derivatized biological samples, since m/z 73 is a common mass in derivatized products. Using the cached scripts, the TOFMS can be transformed into a selective detector for spectra with the desired characteristics, and the surface plot showing only the target analytes allows rapid screening of samples.
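In the spirit of the cached-script mechanism just described, a minimal sketch of a function that returns a numeric value only for matching spectra is shown below; the m/z 117 criterion is an assumed additional TMS-fatty-acid ion, not taken from the published scripts, and the function signature is hypothetical:

```python
def cached_tms_fa_trace(spectrum, tic):
    """Cached-script-style function: return a numeric response (here the
    peak's total ion current) when the spectrum matches the class
    criteria, and 0 otherwise, so that plotting the cached values
    yields a selective trace for TMS esters of fatty acids."""
    base = max(spectrum.values())
    matches = (spectrum.get(73, 0) > 0.10 * base        # TMS moiety
               and spectrum.get(75, 0) > 0.01 * base
               and spectrum.get(117, 0) > 0.01 * base)  # assumed extra ion
    return float(tic) if matches else 0.0
```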
Conclusions
Due to the complexity and the amount of data acquired from GC×GC-TOFMS analyses, data handling has been a significant challenge, especially with metabolomics data. In this work, scripting algorithms for numerous classes of metabolites were written using only mass spectral information.
2021-07-26T00:05:32.937Z
2021-06-15T00:00:00.000
{ "year": 2021, "sha1": "b741da725f800f3b338d2ccb80324e1b87a41757", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2297-8739/8/6/84/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "bb0d3e53d9a6b28164791d1a6ba29b879a43ebea", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
249378700
pes2o/s2orc
v3-fos-license
Immunogenicity and safety of NVSI-06-07 as a heterologous booster after priming with BBIBP-CorV: a phase 2 trial

The increase in coronavirus disease 2019 (COVID-19) breakthrough cases underscores the need for booster vaccination. We conducted a randomised, double-blinded, controlled, phase 2 trial to assess the immunogenicity and safety of heterologous prime-boost vaccination with an inactivated COVID-19 vaccine (BBIBP-CorV) followed by a recombinant protein-based vaccine (NVSI-06-07), using a homologous boost with BBIBP-CorV as control. Three groups of healthy adults (600 individuals per group) who had completed two-dose BBIBP-CorV vaccinations 1–3 months, 4–6 months and ≥6 months earlier, respectively, were randomly assigned in a 1:1 ratio to receive either an NVSI-06-07 or a BBIBP-CorV boost. Immunogenicity assays showed that in the NVSI-06-07 groups, neutralizing antibody geometric mean titers (GMTs) against the prototype SARS-CoV-2 increased by 21.01–63.85 folds on day 28 after vaccination, whereas only 4.20–16.78-fold increases were observed in the control groups. For the Omicron variant, the neutralizing antibody GMT elicited by the homologous boost was 37.91 on day 14; however, a significantly higher neutralizing GMT of 292.53 was induced by the heterologous booster. Similar results were obtained for other SARS-CoV-2 variants of concern (VOCs), including Alpha, Beta and Delta. Both heterologous and homologous boosters have a good safety profile. Local and systemic adverse reactions were absent, mild or moderate in most participants, and the overall safety was quite similar between the two booster schemes. Our findings indicated that NVSI-06-07 is safe and immunogenic as a heterologous booster in BBIBP-CorV recipients and was immunogenically superior to the homologous booster against not only the SARS-CoV-2 prototype strain but also VOCs, including Omicron.

INTRODUCTION
The epidemic of coronavirus disease 2019, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has stimulated global efforts to develop safe and effective vaccines against the rapid spread of the virus. So far, great progress has been achieved, and a total of ten vaccines have been approved by the World Health Organization (WHO) for emergency use, including three inactivated, two mRNA-based, three viral vector-based and two recombinant nanoparticle protein-based vaccines (https://www.who.int/teams/regulation-prequalification/eul/covid-19). These COVID-19 vaccines have been shown to offer effective protection against severe disease, hospitalization and death. 1 According to the published data from clinical trials, the efficacy of several leading vaccines such as BNT162b2, ChAdOx1, Ad26.COV2.S, mRNA-1273, BBIBP-CorV, CoronaVac and NVX-CoV2373 was reported to be 95.0%, 70.4%, 67%, 94.1%, 78.1%, 51.0-83.5% and 90.4%, respectively. 2,3 Among these vaccines, the inactivated vaccine BBIBP-CorV produced by Sinopharm has been used in large-scale populations worldwide, and many studies have demonstrated the effectiveness of this vaccine against the wild-type SARS-CoV-2 and its variants. [4][5][6][7][8] However, due to the waning of neutralization titers over time in vaccinated individuals and the emergence of SARS-CoV-2 variants such as Omicron and Delta, breakthrough infection cases continue to increase, 9,10 which raises the urgent need for new strategies to cope with this problem. Booster vaccination may be an effective way to improve waning immunity and broaden protective immune responses against SARS-CoV-2.
Clinical trials in adults who had received the two-dose primary vaccination series with the mRNA-1273 or BNT162b2 vaccines showed that a booster injection of the same vaccine, six to eight months later, yielded 3.8- to 7-fold higher neutralizing antibody titers against the wild-type virus compared to the peak value after the primary series. [11][12][13] Besides homologous boosting, the heterologous booster strategy has also attracted great interest, and multiple clinical trials and cohort studies have shown that the immune response elicited by heterologous prime-boost vaccination was significantly greater than that induced by its homologous counterparts. [14][15][16][17][18][19][20][21] Currently, several clinical trials have been conducted to evaluate the safety, immunogenicity and efficacy of a heterologous booster dose of recombinant subunit vaccines, such as V-01 (NCT05096832), ZF2001 (NCT05205096, NCT05205083) and SCB-2019 (NCT05087368), following two-dose inactivated vaccines. Some preliminary study results have demonstrated that the heterologous booster of recombinant protein subunit vaccines distinctly improved the neutralizing antibody level [17][18][19][20][21] and protective efficacy (https://en.livzon.com.cn/companyfile/1029.html) against various SARS-CoV-2 strains, including the Omicron variant, and was immunogenically superior to the homologous booster of inactivated vaccines. [17][18][19][20][21]

Based on structural and computational analysis of the spike receptor-binding domain (RBD) of SARS-CoV-2, we have designed a recombinant COVID-19 vaccine (CHO cells), named NVSI-06-07, that uses a homologous trimeric form of RBD (homo-tri-RBD) as the antigen. In homo-tri-RBD, three RBDs, derived from the prototype SARS-CoV-2 strain, were connected end-to-end into a single molecule by using their own long loops at the N- and C-termini, without introducing any exogenous linker, and were then co-assembled into a trimeric structure. 22 The safety and immunogenicity of this vaccine have been evaluated in a phase 1/2 clinical trial conducted in China. The interim analysis results showed that the immunogenicity of NVSI-06-07 was comparable to other recombinant protein-based COVID-19 vaccines, and no vaccine-related serious adverse events were reported in the trial (ClinicalTrials.gov number: NCT04869592, data not yet published). We sought to determine whether the use of NVSI-06-07 as a heterologous booster can effectively improve the immune responses in inactivated vaccine recipients. Here, we report the immunogenicity and safety of heterologous booster vaccination with NVSI-06-07 at pre-specified time intervals in individuals who had previously received two doses of the inactivated vaccine BBIBP-CorV, compared with those of the homologous boosting strategy with a third dose of BBIBP-CorV. Moreover, as an exploratory study, the live-virus neutralization activities of the vaccinated sera were also evaluated against Omicron and other SARS-CoV-2 variants of concern (VOCs).

Study participants
Healthy adults aged ≥18 yrs who had received a full regimen (two doses) of BBIBP-CorV 1-3 months, 4-6 months and ≥6 months (maximum: 12.7 months, median: 7.3 months) earlier, respectively, were recruited as shown in Fig. 1. For these three groups with different boosting intervals, a total of 1800 participants, with 600 in each group, from the United Arab Emirates (UAE) took part in the trial.
For each group, participants were randomly assigned to receive either a heterologous booster vaccination with NVSI-06-07 or a homologous booster with a third dose of BBIBP-CorV (Fig. 1). Demographic characteristics were similar between the heterologous and homologous boosting groups. The participants in the two groups exhibited balanced distributions in age, sex, height and body weight (Table 1). The nationality of participants is provided in Supplementary Table 1. All 1800 participants receiving booster vaccination were included in the safety set (SS) for safety analysis. A total of 1672 participants completed the follow-up visit on day 14, and these individuals were included in per-protocol set 1 (PPS1) for the day 14 immunogenicity analysis. A total of 1496 participants completed the day 28 visit and were included in per-protocol set 2 (PPS2) for the day 28 immunogenicity analysis (Fig. 1).

The immunogenic superiority of the heterologous NVSI-06-07 booster to the homologous BBIBP-CorV booster was further confirmed by the neutralizing antibody responses measured with live-virus neutralization assays. Before booster vaccination, most of the participants had detectable neutralizing activities against prototype SARS-CoV-2, with a comparable level of pre-booster neutralizing antibodies between the two boosting groups. The pre-booster neutralizing antibody GMT of participants in the group with an over-6-month boosting interval was about half of the value in the 4-6-month group, indicating waning of neutralizing antibody responses over time (Table 3). On day 14 after the boost, the neutralizing antibody titers against the prototype SARS-CoV-2 live virus were significantly improved in both the heterologous and homologous boosting recipients. In homologous boosting participants, the seroconversion rates in the 1-3-month, 4-6-month and ≥6-month boosting-interval groups were 39.26% (95%CI, 33.40%-45.36%), 26.90% (21.88%-32.39%) and 52.98% (47.01%-58.89%), respectively, whereas they were 81.65% (76.47%-86.10%), 86.38% (81.79%-90.18%) and 86.83% (82.31%-90.56%) for the heterologous boost (Table 3). The seroconversion rates induced by the heterologous boost were significantly higher (P < 0.0001) than those induced by the homologous boost (Table 3). Compared with the pre-boosting baseline level, the homologous boost vaccination elicited 3.41-fold (95%CI, 2.90-4.00) higher neutralizing GMTs against prototype SARS-CoV-2 in the 1-3-month boosting-interval group, 2.58-fold (95%CI, 2.21-3.00) higher in the 4-6-month group and 7.36-fold higher in the ≥6-month group (Table 3). On both day 14 and day 28 after the boost, the neutralizing antibody levels achieved by the heterologous booster were much higher (P < 0.0001) than those achieved by the homologous booster, indicating that NVSI-06-07 is immunologically preferred as a booster choice over BBIBP-CorV (Table 3). Comparison among the three groups with different prime-boosting intervals using covariance analysis models showed that the ≥6-month groups had a significantly higher increase (P < 0.05) in neutralizing GMTs than the 1-3-month and 4-6-month groups for both heterologous and homologous boosts (Supplementary Table 2). In order to investigate whether the immune response elicited by booster vaccination was age-dependent, participants in each group were divided into different age subgroups and the immunogenicity data were compared between these subgroups (Supplementary Tables 3-10).
Statistical analysis showed that both the RBD-binding IgG GMCs and the neutralizing antibody GMTs induced by the heterologous booster were comparable between different age subgroups (P > 0.05), except for the IgG GMCs between the 18-44 yrs and ≥45 yrs subgroups in the 1-3-month group (P = 0.0467). These results indicated that the heterologous NVSI-06-07 booster exhibited similar immunogenicity across different age subgroups. However, it should be noted that the number of older participants was much smaller than that of younger participants in the trial. The immunogenicity of the NVSI-06-07 booster in the elderly population should be further assessed in the future.

Cross-reactive immunogenicity against main SARS-CoV-2 VOCs including Omicron
Serum samples of 192 participants with sequential enrollment numbers in the ≥6-month boosting-interval group (half boosted with the homologous vaccination and the other half with the heterologous vaccination) were used to evaluate the neutralizing sensitivities to the Omicron variant using live-virus neutralization assays. In participants boosted with a third dose of BBIBP-CorV, the neutralizing antibody GMT against Omicron was substantially reduced, by 11.32 folds, on day 14 post-boost compared with that against the prototype SARS-CoV-2 strain, implying substantial escape of the Omicron variant from the antibody neutralization response elicited by BBIBP-CorV. By comparison, in participants receiving the heterologous boost of NVSI-06-07, the neutralizing antibody GMT against Omicron only declined by 6.62 folds, as shown in Fig. 2. The neutralizing antibody GMT against Omicron elicited by the heterologous boost was 292.53 (95%CI, 222.81-384.07), which was significantly higher than the 37.91 (95%CI, 30.35-47.35) induced by the homologous boost. Heterologous prime-booster vaccination with BBIBP-CorV followed by NVSI-06-07 demonstrated much more robust neutralizing activities against Omicron compared with homologous prime-boost vaccination with three doses of BBIBP-CorV.

We also evaluated the immune response of the booster vaccinations against other SARS-CoV-2 VOCs, including Alpha, Beta and Delta, using the subset of serum samples from the ≥6-month boosting-interval group. On day 14 after boosting with BBIBP-CorV, the neutralizing antibody GMTs against Alpha, Beta and Delta showed 2.32-, 2.61- and 2.05-fold decreases compared to that against the prototype strain. All three VOCs exhibited reduced sensitivity to sera neutralization, among which the Beta variant showed the largest reduction in neutralization sensitivity. By comparison, the sera from the participants boosted with NVSI-06-07 showed only 1.30-, 1.21- and 1.60-fold reductions in neutralization of the Alpha, Beta and Delta variants, respectively. With the heterologous booster vaccination, the neutralizing antibody GMT against these VOCs reached 1492.24 for Alpha (Fig. 2). The heterologous boost elicited much higher neutralizing activities against the tested VOCs.

Safety
For the safety analysis, four participants reported serious adverse events (SAEs) within 30 days after the boost, two of whom were in the homologous booster group and the other two in the heterologous booster group. None of these SAEs was related to the tested vaccines, as assessed by the investigator (Supplementary Table 11).
Besides, no adverse event of special interest (AESI) was reported. The number of participants reporting any adverse reaction was 177 (19.64%), and no statistically significant difference was observed in the occurrence of adverse reactions between these two groups (P = 0.6805) (Supplementary Table 12). The number of individuals reporting any unsolicited adverse event relevant to vaccination was 67 (7.45%) and 66 (7.33%) in the heterologous and homologous boosting groups, respectively, within 30 days after booster vaccination (P = 0.9285) (Supplementary Table 12). These reported unsolicited adverse reactions were all ranked as grade 1 or 2. No adverse reaction was reported within 30 min. For solicited adverse reactions collected within 7 days after the boost, most of the local and systemic adverse reactions were graded as 1 (mild) or 2 (moderate) in both the heterologous and homologous boosting groups, except for grade 3 systemic fever, reported by 1 participant (0.11%) in the heterologous boosting group and 3 participants (0.33%) in the homologous boosting group (Fig. 4 and Supplementary Table 12). The most common injection-site adverse reaction within 7 days was pain, reported by 70 (7.79%) subjects among the NVSI-06-07 boosting recipients and 47 (5.22%) among the BBIBP-CorV boosting recipients. Only grade 1 pain occurred more often in the NVSI-06-07 booster groups than in the BBIBP-CorV booster groups (P = 0.0237); for all the other local adverse reactions, there was no statistically significant difference between the two boosting schemes (P > 0.05) (Fig. 3 and Supplementary Table 12). The most common systemic adverse reactions were headache, muscle pain (non-inoculation site), fatigue and fever, which were reported in 48 (5.34%), 45 (5.01%), 27 (3.00%) and 18 (2.00%) participants among the NVSI-06-07 boosting recipients, and in 56 (6.22%), 41 (4.55%), 38 (4.22%) and 21 (2.33%) among the BBIBP-CorV boosting recipients. No statistically significant differences were observed in systemic adverse reactions between the heterologous and homologous boosting groups (P > 0.05), except that grade 1 fatigue was reported more often in the BBIBP-CorV booster groups than in the NVSI-06-07 booster groups (P = 0.0373) (Fig. 4 and Supplementary Table 12).

DISCUSSION
Findings from this trial showed that both the heterologous boost with NVSI-06-07 and the homologous boost with BBIBP-CorV were immunogenic in BBIBP-CorV recipients, but the immunogenicity of the heterologous boost was much greater than that of the homologous boost. The fold increases in both IgG GMCs and neutralizing antibody GMTs from the corresponding baselines were significantly higher after the heterologous boost than after the homologous boost. Especially, for adults primed with BBIBP-CorV over 6 months ago, a 63.85-fold increase in neutralizing antibody GMTs was obtained by the heterologous boost, in comparison to 16.78 folds by the homologous boost.
Compared with the peak value of neutralizing antibody titers primed with two doses of BBIBP-CorV, as reported in the previous literature, 23 the neutralizing GMTs boosted by a third dose of BBIBP-CorV were improved by 2.09-3.85 folds at 28 days after the boost, while boosting with NVSI-06-07 induced significant 6.94-13.34-fold increases over the peak value, implying that the neutralizing antibody responses were substantially amplified by the heterologous booster vaccination. Among the three tested groups with different prime-boosting intervals, the pre-booster neutralizing antibody level in the ≥6-month group was the lowest, indicating the waning of immunity over time before boosting. However, this group had much higher post-booster neutralizing titers than the other two groups with shorter prime-boosting intervals. The better immune response of a longer prime-boost interval is probably due to additional antibody maturation with increased antibody avidity. Similar observations were also reported for booster shots of the ZF2001, ChAdOx1, and BNT162b2 COVID-19 vaccines. [24][25][26] The phenomenon of higher immunogenicity after a wider prime-boost interval has been well recognized for other viral and bacterial vaccines, such as influenza, human papillomavirus, Ebola, DTP (Pertussis, Diphtheria, Tetanus) and Polio vaccines. [27][28][29][30] However, different from the neutralizing antibodies, the RBD-binding IgG level did not increase with longer prime-boost intervals. The results implied that wider dose spacing may contribute to the maturation of neutralizing antibodies, but may have little effect on non-neutralizing antibodies. In addition, our study also showed that the heterologous NVSI-06-07 booster exhibited similar immunogenicity in both older and younger adults. However, because the number of older participants in the trial was far smaller than that of younger participants, this finding should be further validated in the future.

The overall occurrence of adverse reactions was low for both the heterologous and homologous boost vaccinations. Most of the reported adverse reactions were graded as mild or moderate, with the most common symptoms being injection-site pain, headache, muscle pain (non-inoculation site), fatigue and fever. The reactogenicity of the booster vaccinations was similar to that of the priming vaccinations described in the previously published literature, 23 and there was no obvious difference in overall safety between the heterologous and homologous boosts. Heterologous prime-boost combinations among viral vector COVID-19 vaccines, inactivated vaccines and mRNA vaccines have been shown to significantly improve immune responses, and heterologous boosts were more immunogenic than homologous boosts. 14-16 All possible prime-boost combinations among the Ad26.COV2.S, mRNA-1273 and BNT162b2 vaccines showed that the neutralizing antibody titer was improved by 4-20-fold after a homologous boost and 6-73-fold after a heterologous boost. 16 A heterologous booster dose of Convidecia after two doses of CoronaVac elicited a 78.3-fold rise in neutralizing antibody titers, whereas only a 15.2-fold increase was obtained for the homologous CoronaVac booster. 15 The anti-spike IgG antibody concentrations in CoronaVac recipients were improved by 12-fold for the homologous boost, and 152-, 90- and 77-fold for the heterologous BNT162b2, ChAdOx1 and Ad26.COV2.S boosts, respectively. 31 Seven different COVID-19 vaccines (ChAdOx1, BNT162b2, mRNA-1273, NVX-CoV2373,
Ad26.COV2.S, CVnCoV and VLA2001) as a booster dose following two doses of ChAdOx1 or BNT162b2 induced 1.3-32.3-fold increases in anti-spike IgG levels. 32 Two small-scale, open-label studies showed that a booster dose of ZF2001 in participants primed with two-dose inactivated vaccines induced 33.9-75.6-fold increases in neutralizing antibody titers. [17][18][19] Another two booster vaccination studies illustrated that the pseudo-virus neutralizing antibody titers against the wild-type SARS-CoV-2 strain and the Omicron variant elicited by a ZF2001 booster following two-dose inactivated vaccines were 2.2-3.3-fold and 1.6-2.5-fold higher, respectively, than those induced by a homologous booster of inactivated vaccines. 20,21 A nationwide cohort study conducted in Sweden showed that the effectiveness against symptomatic COVID-19 infection was 67% and 79% for the heterologous ChAdOx1/BNT162b2 and ChAdOx1/mRNA-1273 prime-boost vaccinations, respectively, which was higher than the 50% of the homologous ChAdOx1 vaccination. 14 The preliminary analysis results of a phase III clinical trial demonstrated that after a heterologous booster vaccination of the recombinant protein subunit vaccine V-01 in inactivated vaccine recipients, the person-year incidence rate of SARS-CoV-2 infections was reduced from 12.80% to 6.73%, and the absolute protective efficacy was 61.35% (https://en.livzon.com.cn/companyfile/1029.html). The results of this trial support the conclusion that the heterologous BBIBP-CorV/NVSI-06-07 prime-boost vaccination scheme serves as another heterologous boosting strategy to better combat SARS-CoV-2. BBIBP-CorV has been approved by many countries for emergency use or conditional marketing, and large-scale populations in the world have completed the primary series of BBIBP-CorV. Considering its high effectiveness and low side-effects, NVSI-06-07 could act as a booster shot to top up immunity against SARS-CoV-2. Our study showed that the heterologous NVSI-06-07 boost not only substantially increased neutralization activity, but also improved the breadth of the neutralizing response. Compared to the homologous boost with a third dose of BBIBP-CorV, significantly higher neutralizing antibody responses against SARS-CoV-2 VOCs, including Omicron, Alpha, Beta and Delta, were achieved by the heterologous boost with NVSI-06-07. The results were consistent with other studies. 17,18 Especially, owing to its high transmissibility and immune-escape capability, [33][34][35] the Omicron variant has rapidly spread around the world. However, an Omicron-specific vaccine is still not available, and other strategies are urgently needed to control the pandemic spread of this variant. Considering that BBIBP-CorV has been applied in large-scale populations and that the BBIBP-CorV/NVSI-06-07 prime-booster vaccination can elicit a certain level of neutralizing antibodies against Omicron, this heterologous prime-booster vaccination might serve as a possible strategy for combating Omicron. Many studies have revealed that the levels of neutralizing antibody response are highly correlated with the real-world protection efficacy of COVID-19 vaccines. [36][37][38][39][40][41] According to the previously determined threshold indicative of reduced risks of symptomatic infection, 41 heterologous prime-boost vaccination with BBIBP-CorV combined with NVSI-06-07 might provide protective effects against SARS-CoV-2 in the real world. This study has limitations.
First, among the volunteers enrolled in the trial, the number of men was much larger than that of women, and thus the data did not adequately represent the immune effects in women. Second, the proportion of older individuals aged ≥60 yrs among the participants was small, and the immunogenicity of the NVSI-06-07 booster in the elderly population should be further assessed in the future. Third, the reference serum used in our live-virus neutralization assays was not the WHO international standard reference material, and the reported neutralizing antibody titers were not converted to international units. Fourth, data on the immune persistence of the booster vaccination are not yet available, and we will report the results once the data have been completed and analyzed. Finally, cellular immunity was not evaluated in this trial. In summary, heterologous booster vaccination with NVSI-06-07 in BBIBP-CorV recipients was well tolerated and immunogenic against not only the SARS-CoV-2 prototype strain but also the VOCs including Omicron, which supports the approval of emergency use of this heterologous booster strategy.

MATERIALS AND METHODS
Trial design and participants
This trial was designed as a phase 2, randomised, double-blinded, controlled trial conducted at a single clinical site in the UAE. Eligible participants were healthy adults, aged ≥18 yrs, who had previously received a full series (two doses) of BBIBP-CorV, a COVID-19 inactivated vaccine. Three groups of participants, who had received their second dose of BBIBP-CorV 1-3 months, 4-6 months or at least 6 months earlier, respectively, were enrolled, with 600 individuals per group. Female volunteers were not pregnant or breastfeeding, and appropriate contraceptive measures had been taken within 2 weeks before enrollment. Participants needed to understand the trial procedures and be willing to complete the follow-up visits. Participants were screened for health status by inquiry and physical examination prior to enrollment. Confirmed, suspected or asymptomatic cases of COVID-19 were excluded from the trial. Volunteers who had a history of SARS or MERS infection, or who had received any COVID-19 vaccine other than the inactivated vaccine BBIBP-CorV, were also excluded. Other exclusion criteria included: axillary temperature ≥37.3°C (forehead temperature ≥37.8°C); a history of severe allergic reactions to previous vaccinations, or allergy to any component of the vaccine; severe respiratory disease, severe liver and kidney diseases; hypertension (systolic blood pressure ≥150 mmHg, diastolic blood pressure ≥90 mmHg), diabetic complications, malignant tumors, various acute diseases or acute attacks of chronic diseases; congenital or acquired immunodeficiency, HIV infection, lymphoma, leukemia or other autoimmune diseases; a history or family history of convulsions, epilepsy, encephalopathy, infectious diseases or mental illness; congenital malformation or developmental disorder, genetic defect, severe malnutrition; a history of coagulation dysfunction (e.g., coagulation factor deficiency and coagulation diseases); asplenia or splenectomy, or functional asplenia caused by any situation; ongoing anti-TB (tuberculosis) treatment; receipt of immunoenhancement or immunosuppressive therapy within 3 months (continuous oral or IV administration for more than 14 days); receipt of other vaccines within 14 days; receipt of blood products within 3 months or other investigational drugs within 6 months; and any other condition judged by the investigators to be unsuitable for this trial.
The detailed inclusion and exclusion criteria can be found on ClinicalTrials.gov (NCT05033847) or in the study protocol (Supplementary Protocol). The trial protocol was reviewed and approved by the Abu Dhabi Health Research and Technology Ethics Committee. The trial was performed in accordance with Good Clinical Practice (GCP) and the Declaration of Helsinki (with amendments), as well as the local legal and regulatory requirements, and trial safety was overseen by an independent safety monitoring committee. Written informed consent was provided by all participants prior to inclusion in the trial.

Randomisation and masking
Randomisation was performed using an interactive web response system (IWRS). The randomisation list of participants was generated by the stratified blocked randomisation method using SAS software (version 9.4), in which stratification was made according to the different time intervals between the second priming dose of BBIBP-CorV and the booster dose, i.e., 1-3 months, 4-6 months and ≥6 months. Within each stratum, participants were randomised using a block randomisation method, with a block size of 10, in a ratio of 1:1 to receive either a heterologous booster dose of NVSI-06-07 or a homologous booster dose of BBIBP-CorV. A vaccine randomisation list with a randomisation block size of 10 was also generated by SAS software. Both the participant and vaccine randomisation lists were entered into the IWRS. At the trial site, participants were vaccinated according to the randomisation number and the corresponding vaccine number obtained from the IWRS. The trial was double-blinded; to avoid introducing bias, the randomisation and masking processes were handled by personnel independent of trial operations. Participants, investigators and other staff remained blinded to individual treatment assignments during the trial.

Studied vaccines
NVSI-06-07, a recombinant COVID-19 vaccine (CHO cells) encoding a homologous trimeric form of RBD (homo-tri-RBD), was developed by the National Vaccine and Serum Institute (NVSI) and manufactured by Lanzhou Institute of Biological Products Co., Ltd. (LIBP) in accordance with good manufacturing practice (GMP). Homo-tri-RBD is composed of three RBDs from the prototype SARS-CoV-2 strain, which are connected end-to-end and co-assembled into a single molecule to possibly mimic the native trimeric arrangement in the natural spike protein. 22 This vaccine is in a liquid form of 0.5 ml per dose, containing 20 μg antigen and 0.3 mg aluminum hydroxide as adjuvant. The inactivated vaccine BBIBP-CorV, used as a control in this trial, was produced by Beijing Institute of Biological Products Co., Ltd. (BIBP). This vaccine has been approved by the WHO for emergency use and applied in large populations. BBIBP-CorV was developed based on the 19nCoV-CDC-Tan-HB02 strain, which was passaged in Vero cells and inactivated using β-propiolactone. 42 The vaccine was manufactured in a liquid formulation of 0.5 ml per dose, containing 6.5 U antigen. All vaccines were stored at 2°C-8°C prior to use.

Procedures
After screening, eligible participants received the booster inoculation intramuscularly with NVSI-06-07 or BBIBP-CorV, followed by clinical observation at the study site for no less than 30 min. Within the subsequent 7 days after booster vaccination, local and systemic adverse events (AEs) were self-reported daily by participants using standardized diary cards and verified by investigators. Solicited local AEs included pain, induration, swelling, rash, redness and pruritus.
Solicited systemic AEs were fever, diarrhea, constipation, dysphagia, anorexia, vomiting, nausea, muscle pain (systemic), joint pain, headache, cough, breathing trouble, systemic pruritus (no skin damage), abnormal skin mucosa, acute allergic reaction, fatigue and dizziness. Other symptoms were collected as unsolicited AEs. From day 8 to day 30 post-vaccination, unsolicited AEs were recorded by participants on contact cards. Assessments were performed by study investigators to confirm subject safety. Serious adverse events (SAEs) and adverse events of special interest (AESIs) were monitored up to 6 months after vaccination. Safety oversight, including specific vaccination pause rules and decisions on advancement, was provided by an independent safety monitoring committee. The grade of AEs was assessed according to the relevant guidance of the China National Medical Products Administration (NMPA). The causal relationship between adverse events and vaccination was determined by the investigators. Blood samples were collected from the participants before booster vaccination, and on days 14 and 28 after the boost. Immunogenicity was assessed by RBD-specific binding antibody responses (IgG) and neutralizing antibody activities against live SARS-CoV-2 virus. The corresponding seroconversion rates, defined as a ≥4-fold rise in IgG concentrations or neutralizing titers, were determined based on the detected pre-booster and post-booster IgG or neutralizing antibody levels. In order to evaluate cross-neutralizing activities, besides the prototype SARS-CoV-2 live virus, several VOCs, including the Omicron, Alpha, Beta and Delta strains, were also tested in the neutralization assay for a subset of serum samples.

Laboratory tests

The IgG level specific to the prototype RBD was measured using a magnetic chemiluminescence enzyme immunoassay kit purchased from Bioscience (Chongqing) Biotechnology Co. (approved by the China National Medical Products Administration; approval number 20203400183). Serum samples were heat-inactivated at 56°C for 30 min, and then diluted to ensure the concentrations fell within the calibration range of the kit. The IgG concentration measurements were performed on an automated chemiluminescence detector (Axceed 260) according to the manufacturer's detailed instructions. The reference calibrator used in the kit has been calibrated against the WHO International Standard for anti-SARS-CoV-2 immunoglobulin (NIBSC code: 20/136). Neutralizing antibody titer was detected using a live-virus neutralization assay as described in our previous studies. 22 Briefly, heat-inactivated human serum samples were diluted in a two-fold dilution series starting from an initial factor of 1:4 (in detection of neutralizing antibodies against the SARS-CoV-2 prototype strain) or 1:10 (in detection of neutralizing antibodies against VOCs). Serum dilutions were then mixed with the same volume of SARS-CoV-2 live virus at 100 TCID₅₀ (50% tissue culture infectious dose) per well. After incubation at 37°C for 2 h, Vero cells at a density of 1.5-2 × 10⁵ cells per mL were added into the wells and subsequently incubated in a 5% CO₂ incubator at 37°C for 3-5 days. Both positive and negative reference serum controls were included in each assay. Neutralizing antibody titer was reported as the reciprocal of the highest serum dilution that protected 50% of cells from virus infection. Titers of measurements below the lower limit of detection were assigned a value of half the detection limit.
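The titer-handling rules just described (reciprocal-dilution titers, half-the-detection-limit imputation, and the ≥4-fold seroconversion criterion) can be stated compactly; the following Python sketch is illustrative only, with hypothetical example values:

```python
def assign_titer(measured_titer: float, lod: float) -> float:
    """Titers below the lower limit of detection get half the detection limit."""
    return measured_titer if measured_titer >= lod else lod / 2

def seroconverted(pre: float, post: float) -> bool:
    """Seroconversion: a >= 4-fold rise from pre-booster to post-booster."""
    return post / pre >= 4

# Hypothetical prototype-strain assay starting at a 1:4 dilution (LOD = 4).
pre, post = assign_titer(0, lod=4), assign_titer(64, lod=4)  # pre below LOD -> 2.0
print(pre, post, seroconverted(pre, post))                   # 2.0 64 True
```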
All the live-virus neutralization assays were carried out in the Biosafety Level 3 laboratory of the National Institute for Viral Disease Control and Prevention, Chinese Center for Disease Control and Prevention (China CDC), Beijing, China. The live viruses of the SARS-CoV-2 prototype (QD-01), Alpha (BJ-210122-14), Beta (GD84), Delta (GD96) and Omicron (NPRC2.192100003) strains were tested in the assays.

Outcomes

The primary outcome was the comparative assessment of immunogenicity between heterologous and homologous booster vaccinations on days 14 and 28 after the boost. The secondary outcomes were the safety profile within 30 min and within 7 and 30 days of booster vaccination. The exploratory outcome was the immunity against Omicron and other VOCs. Safety was assessed by the occurrence of all SAEs and AESIs, and the occurrence of solicited or unsolicited adverse reactions within 30 days after vaccination. The occurrence and severity of adverse reactions were compared between the heterologous NVSI-06-07 booster groups and the homologous BBIBP-CorV booster groups. Immunogenicity was evaluated by geometric mean concentrations (GMCs) of RBD-binding antibody IgG and geometric mean titers (GMTs) of live-virus neutralizing antibodies, as well as the corresponding seroconversion rates, on days 14 and 28 after booster vaccination. Comparisons of immunogenicity between the heterologous and homologous booster groups were also carried out. The immunity against Omicron and other VOCs was evaluated using live-virus neutralizing antibody GMTs.

Statistical analysis

Assuming that the rate of a 4-fold rise in neutralizing antibody titers reached 85% in both the heterologous and homologous booster groups, and with the non-inferiority threshold set to -10%, the sample size was determined to be 208 using the Miettinen & Nurminen method to achieve 80% power at a one-sided significance level of 2.5%. 43 Assuming that the neutralizing antibody GMTs of the heterologous and homologous boosting groups are comparable, with the standard deviation (SD) of GMT after log10 transformation taken to be 0.7 and the non-inferiority threshold set to 2/3, 250 participants per group were needed to achieve 80% power at a one-sided significance level of 2.5%. Considering the above estimations and a 15%~20% drop-out rate, 600 participants were enrolled into each of the three boosting groups (1-3 months, 4-6 months and ≥6 months). Half of the participants in each group were assigned to the heterologous booster and the other half to the homologous booster. Thus, a total of 1800 individuals (900 in the heterologous groups and 900 in the homologous groups) participated in the trial. For statistical analysis, the full analysis set (FAS), safety set (SS), per-protocol set 1 (PPS1) and PPS2 were defined. FAS included all participants who were randomly assigned to treatment and received the booster dose of vaccination. SS contained all participants who received the booster dose of vaccination. PPS1 and PPS2 included all participants who received the booster dose of vaccination and completed the follow-up visit on day 14 and 28 post-vaccination, respectively. Baseline characteristics were evaluated on the FAS. Continuous variables were analyzed using Student's t-test and categorical variables were analyzed with the Chi-square test. Safety analysis was performed on the SS, and immunogenicity analysis was carried out on the PPS. RBD-specific IgG levels and live-virus neutralizing antibody activities were presented as GMCs and GMTs, respectively.
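As a worked illustration of the GMC/GMT computation referred to above (a geometric mean, i.e., the exponentiated mean of log10-transformed values; the statistical analysis continues below), consider this short Python sketch with hypothetical titers:

```python
import math

def geometric_mean(values: list[float]) -> float:
    """Geometric mean: 10 to the power of the mean of the log10-transformed values."""
    return 10 ** (sum(math.log10(v) for v in values) / len(values))

titers = [8, 16, 64, 128, 32]            # hypothetical post-booster titers
print(round(geometric_mean(titers), 1))  # 32.0
```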
Additionally, based on pre-booster and post-booster values, seroconversion rates (a ≥4-fold increase in antibody concentration or titer) were calculated, with 95% confidence intervals (CIs) of the seroconversion rates computed using the Clopper-Pearson method. 44 The Cochran-Mantel-Haenszel (CMH) method, accounting for stratification factors, was used to compare the proportion differences between the heterologous and homologous booster groups. 45 RBD-specific IgG concentrations and neutralizing antibody titers of the heterologous and homologous booster groups were compared after logarithmic transformation. For safety analysis, the number and proportion of participants reporting at least one adverse reaction post-vaccination were analyzed, and differences between groups were compared using Fisher's exact test (SAS Institute Inc. SAS/STAT® User's Guide). All statistical analyses were carried out using SAS software (version 9.4). All statistical tests were two-sided, and the statistical significance level was P < 0.05.
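For reference, the Clopper-Pearson interval used above for the seroconversion-rate CIs has a standard closed form based on the beta distribution; a minimal Python sketch (using SciPy rather than the SAS procedures actually employed, with hypothetical counts) is:

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact two-sided CI for a proportion, e.g. k seroconversions out of n."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# Hypothetical: 255 of 280 participants seroconvert.
print(tuple(round(x, 3) for x in clopper_pearson(255, 280)))
```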
Prevalence of chronic kidney disease and associated factors among the Chinese population in Taian, China

Background This study was designed to assess the prevalence of chronic kidney disease (CKD) and associated risk factors among the Chinese population in Taian, China. Methods A primary care-based cross-sectional study was conducted in Taian, China, from September to December 2012. Participants selected by a multi-stage stratified cluster sampling procedure were interviewed and tested for hematuria, albuminuria, estimated glomerular filtration rate (eGFR) and other clinical indices. Factors associated with CKD were analyzed by univariate and multivariate logistic regression analysis. Results A total of 14,399 subjects were enrolled in this study. The rates of hematuria, albuminuria and reduced eGFR were 4.20%, 5.25% and 1.89%, respectively. Approximately 9.49% (95% CI: 8.93%–10.85%) of the participants had at least one indicator of CKD, with an awareness rate of 1.4%. Univariate analyses showed that greater age, body mass index, and systolic and diastolic blood pressure; higher levels of serum creatinine, uric acid, fasting blood glucose, triglycerides, total cholesterol and low-density lipoprotein cholesterol; and lower eGFR were associated with CKD (p < 0.05 each). Multivariate analysis showed that age, female gender, educational level, smoking habits, systolic blood pressure, and history of diabetes mellitus, hyperlipidemia, hypercholesterolemia and hyperuricemia were independent risk factors for CKD. Conclusions The prevalence of CKD in the primary care population of Taian, China, is high, although awareness is quite low. Health education and policies to prevent CKD are urgently needed among this population.

Background

Chronic kidney disease (CKD), as defined and classified by the Kidney Disease Outcomes Quality Initiative (K/DOQI) of the National Kidney Foundation (NKF) [1], is one of the most important chronic diseases worldwide [2,3]. Up to 14% of adults in the United States aged > 18 years, representing an estimated 31.4 million people, were found to have some degree of CKD in 2007-2010 [4]. In Australia, CKD was present in approximately 1 in 7 persons aged ≥ 25 years [5]. In addition to being common in developed nations, CKD is also highly prevalent in developing countries [6]. A cross-sectional study indicated that the nationwide prevalence of CKD in China was 10.8% (95% confidence interval [CI], 10.2-11.3), affecting an estimated 119.5 million (95% CI, 112.9-125.0 million) patients, similar to the level observed in the United States in 2003 [7,8]. Because the major outcomes of CKD are progression to kidney failure, end-stage renal disease (ESRD), complications of decreased kidney function, and cardiovascular disease, CKD can greatly affect the general population, especially individuals at high risk for hypertension or diabetes [9][10][11][12][13]. Furthermore, the rapid increase in the prevalence of risk factors such as diabetes, hypertension, and obesity has increased the burden of CKD, making CKD an important socioeconomic and public health problem [6,14]. CKD is considered to be a multi-factorial disease, with genetic and environmental factors contributing to its pathogenesis [15]. Many factors are associated with the prevalence of CKD, including gender, occupation, education, marital status, diabetes, hypertension, hyperuricemia, history of kidney stones, and the use of traditional medicines [16,17].
To better prevent and control this disease, several studies in China have investigated the characteristics and potential risk factors of CKD [18][19][20]. However, little is known about the epidemiology of CKD in Taian, China. This study was therefore designed to evaluate the epidemiology of CKD among a primary care population in Taian, China, from September to December 2012.

Sampling and subjects

This cross-sectional survey was conducted between September and December 2012. A multi-stage stratified cluster sampling procedure was employed to select a representative sample of the primary care population in Taian, China. Briefly, two tertiary hospitals (Affiliated Hospital of Taishan University and Taian Central Hospital) and two secondary hospitals (Second Chinese Medicine Hospital in Taian and Taian Rongjun Hospital) were randomly chosen based on the population distribution in Taian. Eighty primary care units from each hospital, or a total of 320 units, were randomly selected; half of the units were located in the Taishan District and half in the Daiyue District. Subsequently, 50 subjects, aged ≥ 18 years and living in Taian > 6 months, were randomly enrolled from each of the 320 units. All participants provided written informed consent, and the study protocol was approved by the Ethics Committee of the Affiliated Hospital of Taishan University.

Questionnaire

A structured questionnaire was designed for data collection. Factors analyzed included sociodemographic status (name, age, gender, ethnicity, educational level, marital status, financial situation and contact information), health-related behaviors (smoking, alcohol drinking, oil intake, diet, spending on food and type of insurance), awareness of CKD and personal and family history of relevant diseases (hypertension, diabetes mellitus, chronic kidney disease, hyperlipidemia and other diseases). All participants were personally interviewed by well-trained interviewers using uniform and standardized language.

Anthropometric measurements

Standard protocols and techniques were utilized by medical staff to measure anthropometric parameters, including weight, height, systolic blood pressure (SBP), and diastolic blood pressure (DBP). Body mass index (BMI) was calculated as weight (kg)/height (m²).

Blood and urine sample collection

Morning urine samples (non-menstrual period for women) were obtained after an overnight fast (at least 10 h). Urinary factors were measured using a Urine Analyzer (Urit Group, China) and urinary sediments were examined by light microscopy (Olympus Corporation, Japan). Venous blood samples were collected at the same time and used to measure various biomarkers, including fasting blood glucose (FBG), total cholesterol (TCH), triglyceride (TG), high density lipoprotein-cholesterol (HDL-C), low density lipoprotein-cholesterol (LDL-C), uric acid (UA) and serum creatinine (Scr) concentrations. All assays were performed by well-trained laboratory technicians using reagents from BioSino Bio-technology and Science Inc, China. Differences between males and females and between subjects with and without CKD were analyzed. Awareness of CKD was defined as subject knowledge of having CKD, based on a previous diagnosis by a physician. Hypertension was defined as SBP ≥ 140 mmHg and/or DBP ≥ 90 mmHg, any use of antihypertensive medication, or self-reported history of hypertension. Obesity was defined as BMI ≥ 28 kg/m².
Diabetes mellitus was defined as fasting blood glucose (FBG) ≥ 7.0 mmol/L and/or postprandial blood glucose (PBG) ≥ 11.1 mmol/L, or a history of diabetes. Hyperlipidemia (HLP) was defined as TG ≥ 1.70 mmol/L and/or TCH ≥ 5.72 mmol/L, or a history of HLP. Hyperuricemia was defined as UA ≥ 420 μmol/L for males and ≥ 360 μmol/L for females, or a history of hyperuricemia.

Statistical analysis

All statistical analyses were performed using SPSS 15.0 software. Categorical variables were presented as percentages and compared using Pearson chi-square tests, and continuous variables were reported as mean ± standard deviation (SD) and compared using unpaired t-tests or one-way analysis of variance. Multivariate logistic regression was performed to identify independent factors associated with CKD, including age, gender, smoking, educational level, obesity, history of nephrotoxic medications and hypertension, diabetes, cardiovascular disease, hyperlipidemia, hyperuricemia, and other diseases. Odds ratios (ORs) and corresponding 95% CIs were calculated. A p-value < 0.05 was considered statistically significant.

Participant characteristics

After excluding 1,601 subjects who lacked sufficient data, 14,399 (90.0%) participants were included in this analysis; of these, 59.46% were males and 40.54% were females. Baseline demographic, anthropometric and laboratory data are shown in Table 1. The mean age of all subjects was 48.97 ± 17.02 years (range, 18-89 years), while the mean ages of males and females were 49.64 ± 16.65 and 48.13 ± 18.01 years, respectively. Age, BMI, SBP, DBP, Scr, UA, FBG, TG, and eGFR were significantly higher in males (p < 0.05 each), whereas HDL-C and LDL-C were significantly higher in females (p < 0.05 each). TCH was similar in males and females (p > 0.05).

Prevalence of CKD

Of the 14,399 subjects, 1,366 (9.49%; 95% CI: 8.93%-10.85%), including 747 males and 619 females, were positive for CKD. The rates of hematuria, albuminuria and reduced eGFR (<60 mL/min/1.73 m²) in subjects with CKD were 4.20%, 5.25% and 1.89%, respectively, although only 1.4% of these subjects were aware they had CKD. The prevalence of CKD was similar in males and females, while the rates of albuminuria and reduced eGFR were significantly higher in males (p < 0.01 each) and the rate of hematuria was significantly higher in females (p = 0.002). Figure 1 shows the age-specific rates of CKD, hematuria, albuminuria and reduced eGFR. Rates of CKD, albuminuria and reduced eGFR differed significantly by age (p < 0.01 each), with the rates of CKD and reduced eGFR increasing with age, especially among people aged > 60 years. In contrast, the rate of albuminuria was highest in subjects aged 40-49 years. Reduced eGFR was observed in 24.49% of subjects aged > 80 years.

CKD risk factors

To characterize the risk factors for CKD, anthropometric measurements and biochemical indices were compared in subjects with and without CKD. Univariate analysis showed that age, BMI, SBP, DBP, Scr, UA, FBG, TG, TCH, and LDL-C were significantly higher, while eGFR was significantly lower, in subjects with than without CKD (p < 0.05 each) (Table 2). Independent factors for CKD, as well as for hematuria, albuminuria and reduced eGFR, were analyzed by multivariate logistic regression analysis (Table 3). Factors independently associated with CKD included older age, female gender, smoking, high SBP, a history of diabetes mellitus, hyperlipidemia, hypercholesterolemia, or hyperuricemia, and a lower level of education.
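To make the operational definitions and the regression analysis above concrete, the following Python sketch encodes the study's thresholds and shows how ORs with 95% CIs could be extracted from a multivariate logistic model (the indicator-specific results continue below). The study itself used SPSS; the per-participant column names and the synthetic demonstration data here are hypothetical assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def classify(row: dict) -> dict:
    """Apply the study's operational definitions (thresholds quoted above)."""
    return {
        "hypertension": row["sbp"] >= 140 or row["dbp"] >= 90,        # mmHg
        "obesity": row["bmi"] >= 28,                                  # kg/m^2
        "diabetes": row["fbg"] >= 7.0 or row["pbg"] >= 11.1,          # mmol/L
        "hyperlipidemia": row["tg"] >= 1.70 or row["tch"] >= 5.72,    # mmol/L
        "hyperuricemia": row["ua"] >= (420 if row["male"] else 360),  # umol/L
        "ckd": row["hematuria"] or row["albuminuria"] or row["egfr"] < 60,
    }

def odds_ratios(df: pd.DataFrame, outcome: str, predictors: list[str]) -> pd.DataFrame:
    """Fit a multivariate logistic model and report ORs with 95% CIs."""
    X = sm.add_constant(df[predictors].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=False)
    ci = np.exp(fit.conf_int())
    return pd.DataFrame({"OR": np.exp(fit.params),
                         "2.5%": ci[0], "97.5%": ci[1]}).drop("const")

# Synthetic demonstration data (for shape only, not the study's data).
rng = np.random.default_rng(0)
demo = pd.DataFrame({"age_per_10yr": rng.normal(5, 1.7, 400),
                     "diabetes": rng.integers(0, 2, 400)})
logit = 0.08 * demo["age_per_10yr"] + 0.7 * demo["diabetes"] - 1.5
demo["ckd"] = (rng.random(400) < 1 / (1 + np.exp(-logit))).astype(int)
print(odds_ratios(demo, "ckd", ["age_per_10yr", "diabetes"]))
```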
Older age, male gender, obesity, high SBP, hyperlipidemia, and smoking were independent risk factors for albuminuria; female gender and high SBP were risk factors for hematuria; and older age, female gender, and histories of cardiovascular disease, diabetes mellitus, hyperlipidemia, hypercholesterolemia, and hyperuricemia were independent risk factors for reduced eGFR.

Discussion

This is the first surveillance of CKD among subjects who visit selected hospitals in Taian City for primary care. This cross-sectional study evaluated the epidemiology of CKD and its associated factors from September to December 2012. The key findings of this study were that 9.49% of the representative participants in Taian City had at least one indicator of CKD, whereas only 1.4% were aware that they had this disease. Factors associated with CKD, including diabetes mellitus, hypertension and educational level, were identified. Our finding that the prevalence of CKD in Taian City was 9.49% was consistent with the first national survey in China, which showed that the overall prevalence of CKD was 10.8% [7]. However, studies in other provinces and municipalities in China have reported higher rates of CKD, 10.1% to 13.5% in Beijing, Zhejiang and Guangdong, and 19.1% in Tibet [18][19][20]22]. Most of these regions are economically developed. The prevalence of diabetes mellitus has been found to increase substantially with economic development, from 5.8% in underdeveloped regions to 12.0% in developed regions [15]. A recent survey on the prevalence of CKD with type 2 diabetes (T2DM) in US adults found that eGFR was reduced or albuminuria present in 43.5% of the population, and that CKD was present in 61.0% of persons aged ≥ 65 years [25]. Our survey results indicated that diabetes mellitus was the strongest risk factor for CKD (OR = 2.06), which may explain the higher prevalence of CKD in Beijing, Zhejiang and Guangdong. Beijing is located in the north of China and its residents have a higher sodium intake in their diets [26,27]. Sodium intake has been associated with increased blood pressure [28]. Our survey indicated that the second strongest risk factor for CKD was high SBP (OR = 1.72), an indicator of hypertension and commonly found in subjects with progressive CKD. A retrospective study of 540 Chinese patients with CKD found that 39.6% had hypertension [29]. Unmeasured confounders associated with specific geographical regions may also contribute to variations in the prevalence of CKD. For example, hypoxia is common in populations living at high altitude, such as in Tibet. Laboratory studies have shown that, with the development of hyperuricemia, exposure of rats to intermittent hypobaric hypoxia could lead to renal microvascular and tubulointerstitial injury, suggesting that hypoxia itself may cause low-grade renal injury [30]. Although the prevalence of CKD in Taian was high, only 1.4% of individuals with CKD were aware that they had this disease. This awareness rate was much lower than the rates reported in Thailand (1.9%) [17], India (7.9%) [31], and Saudi Arabia (7.1%) [32], and in a national survey throughout China (12.5%) [7]. Our survey was performed on a primary care population, with most of these patients with CKD being asymptomatic or having few clinical symptoms, which may help to explain the low disease awareness. Tests to detect early-stage CKD are available, but have not yet been used widely, and most individuals do not undergo regular urine testing.
Lack of healthcare resources must also be considered. The World Health Organization (WHO) has reported that, in developing nations, government spending on health care was only 0.4% to 4% of the gross national product, far below the 10% to 16% in developed countries [33]. Compared with those unaware of having CKD, participants who were aware of their condition were more educated [34]. Our results also showed that a higher educational level may be protective in CKD patients (OR = 0.75). Because less educated individuals have fewer health information contacts, they often pay less attention to self-care advice and are less aware of the occurrence of chronic disease, which may delay treatment. Long-term reduction of CKD morbidity and mortality requires more attention to early detection and prevention at the population level, especially in less educated subjects [35,36]. This study used a multi-stage stratified cluster sampling procedure to obtain a representative sample of the primary care population in Taian City. All participating staff received intensive training before starting the survey and standardized protocols were used to ensure the quality of collected data. Additional advantages include the high response rate and the use of the MDRD equation modified for Chinese patients. All of these factors enhanced the credibility of the study results. However, this study had several limitations. First, only one blood and one urine sample were obtained from each participant, making it difficult to determine whether hematuria or albuminuria was transient or persistent. Second, this study was cross-sectional and not longitudinal, preventing determination of whether any risk factors were the cause or result of CKD. Finally, this study was based on a primary care population, and correct sampling weights could not be applied because of insufficient data, thus limiting the generalization of our results to the general population of Taian.

Conclusions

In conclusion, this is the first epidemiological survey of CKD in a large primary care population in Taian, China. The prevalence of CKD was found to be 9.49%, while the rate of awareness was much lower. Age, gender, educational level, smoking, SBP and history of diabetes mellitus, hyperlipidemia, hypercholesterolemia and hyperuricemia were independently associated with CKD. The prevalence of CKD may be reduced by controlling the increasing incidence of diabetes and hypertension in Taian. Health education and preventive policies for the general population are imperative. Rigorously designed studies with longitudinal data are required to confirm our results.
Handwriting Evaluation in School-Aged Children With Developmental Coordination Disorder: A Literature Review

Despite widespread computer use, legible handwriting remains an important common life skill that requires more attention from schools and health professionals. Importantly, instructors and parents typically attribute the difficulties to laziness or a lack of effort, causing the youngster anger and disappointment. Handwriting issues are a public health concern in terms of both prevalence and consequences. Writing is a tough and diverse activity that requires cognitive, perceptual-motor, mental, and emotional abilities. It is largely a motor process involving an effective level of motor organization that results in exact movement synchronization. Handwriting problems have been connected to developmental disorders such as developmental coordination disorder. For the affected youngsters, forming letters takes more work, and the child may forget what he or she planned to write. School children's primary handwriting issues include illegible writing, slow handwriting, and strained writing. Handwriting problems may lead to scholastic underachievement and low self-esteem. Because of this complication, some school-aged children develop handwriting difficulties, which cause psychological distress and learning impairments. In the treatment of children with poor handwriting, therapeutic intervention has been demonstrated to be successful. We aimed to determine how efficient the tools and scales are that assess handwriting in school-aged children with developmental coordination disorder. Keyword searches were conducted on Google Scholar and PubMed, yielding 45 results, eight of which met the inclusion requirements. We concluded that there are many scales and tools to date, but no scale focuses on the temporal and spatial parameters for handwriting evaluation.

Introduction And Background

A neurodevelopmental illness that is manifested by a significant lack of motor coordination, which interferes with academic competency, daily living chores, and recreational engagement, is known as developmental coordination disorder (DCD) [1,2]. The latest edition of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), published by the American Psychiatric Association, states that a child with DCD has motor coordination that is below the norm for his or her chronological age, may have been labelled as "clumsy," and may have experienced delays in early motor milestones such as walking and crawling. Academic performance or daily living tasks are hampered by coordination issues with either gross or fine motor movements, or both. The coordination issues are not attributable to a medical illness (e.g., cerebral palsy, muscular dystrophy, visual impairment, or intellectual disability). If intellectual disability is present, the child's motor challenges go beyond what is predicted based on intelligence quotient (IQ) [3,4]. For newcomers to DCD research, there is a structured body of behavioral and brain-based experimental information that can help theorists build connections across the hierarchy of explanation - brain, cognition, and motor action [5]. Based on this body of research, a (hybrid) multicomponent ecological systems theory model of performance, drawing on advances in cognitive neuroscience, will be developed.
The three essential components of the paradigm are based on systems theory, and motor performance is judged by the interplay of person, task, and environmental constraints [6]. For each individual, an interacting set of constraints shapes response capacities at every particular stage of development [7]. At the most fundamental biological level, genetic variables initiate maturational processes that shape physical system architecture such as brain networks, neuromuscular systems, and biomechanical linkages [8]. However, contextual circumstances are required to activate particular genotypic expressions, and phenotypes represent the outcome of "nature via nurture." These fundamental structures serve as the foundation for a wide range of internal activities, including cognition (e.g., executive functions), motor control processes (e.g., internal modeling), and motor learning (e.g., procedural learning), all of which can be influenced by physical activity over time. As a result, these processes and structures constrain an individual's (latent) movement alternatives. External task constraints, such as the objectives, norms, and equipment connected with a certain activity, are outside of the body and unique to the task at hand [9]. Occupational therapy, physiotherapy, medication (e.g., methylphenidate), diet (e.g., fatty acids combined with vitamin E supplementation), and education (teachers, parents, physical education) are all used as therapeutic interventions. Process-oriented methods concentrate on the components or bodily processes required to execute activities. Bottom-up techniques include sensory integration, kinesthetic training, perceptual training, and combinations of these. The underpinning idea in DCD is that improving bodily functions, including sensory integration, kinesthesia, muscle strength, core stability, and visual-motor perception, leads to enhanced skill execution [10]. Hand-eye coordination, translating a visually perceived item into physical output, graphic abilities, and even handwriting are all examples of fine motor skills. Weak fine motor control, a lack of muscle contraction coordination, and variations in impact rate and strength can all contribute significantly to obscured and incoherent handwriting; as a result, assessing fine motor control in handwriting movement is critical in the overall assessment of handwriting deficits [11]. Graphomotor ability, whether in drawing or writing, is indeed a vital fine motor skill in the classroom. The handwriting of children with DCD is less legible and organized, and their looping or scribbling movements show higher deceleration and acceleration peaks [12]. Additionally, while copying text, children with DCD write fewer letters than children without DCD. In comparison, their pen movements are faster, yet they spend more time holding the pen in place than kids without DCD. Children with DCD pause more frequently than their classmates without DCD, not less frequently. Three main theories have been put forward to explain the handwriting impairments of children with DCD, even though a variety of internal and environmental factors may contribute to the onset of dysgraphia. The first explanation contends that affected children struggle with muscular stiffness and the activation of muscular force; the second, that they struggle to coordinate their movements efficiently; and the third, that developing their motor abilities simply needs more time.
In other words, students struggle to shift from feedback to feedforward handwriting control, which compromises the consistency of a single motor pattern [12]. This study focuses on exploring whether there are any scales and tools that assess the temporal and spatial parameters for the evaluation of handwriting.

Etiology and risk factors

Although DCD is categorized as a continuous disorder, unlike Down syndrome it does not usually have a single discrete origin (such as a single gene mutation), and hence its border with other continuous disorders has been called into doubt. One of the major confounding variables of DCD is thought to be attention deficit hyperactivity disorder (ADHD), with a prevalence ranging from 35% to 50% of patients [13,14]. Another rather common comorbidity, like dyslexia, is specific language impairment, which has an estimated frequency of 32% in the DCD population [15]. Other co-occurring conditions include ocular anomalies (refractive errors, amblyopia, and strabismus), which have been linked to abnormalities in the eye-hand coordination of children with DCD, their ability to use visual cues, particularly to guide limb movements, and their visual memory [16]; hypermobility syndrome of the small joints, which has been linked to difficulties in children's handwriting tasks [17]; and migraines without aura, which manifest with impairment in cognitive functions [18]. Preterm birth, whether defined as short gestational age at delivery or low birth weight, is the single risk factor consistently associated with DCD [19]. This is crucial because children with DCD who are in school tend to retreat from physical and social activities. Furthermore, children with DCD lose physical fitness over time and are more vulnerable to sedentary-lifestyle-related impairments such as cardiovascular compromise and obesity [20]. Children and adolescents with poor motor skills, sometimes referred to as "clumsy," constitute a hidden minority who are at risk of withdrawing from or being excluded from physical activity. Given the interdependence of activity, cardiorespiratory fitness, body fat, and coronary vascular disease, the finding that motor incoordination reduces physical activity levels is relevant [21].

Pathophysiology

According to whole-brain resting-state imaging, children with DCD have disrupted functional connectivity between the sensorimotor network and the posterior cingulate cortex, precuneus, and posterior middle temporal gyrus. This prevented them from using execution knowledge to its fullest potential and most likely hampered the acquisition of motor information [22]. Numerous research studies have advanced the idea that the basal ganglia, parietal lobe, and cerebellum may play a role in its development, given that DCD involves significant motor and visuospatial deficits. According to neuropsychological research, poor visuomotor cognition, low nonverbal abilities, and executive dysfunction impaired frontal problem-solving and praxis skills at the ideomotor, conceptual, visuoconstructive, and speech levels. There were also persistent emotional, behavioral, and social difficulties. The patients' cognitive and affective symptoms strongly suggest that DCD is linked to the "cerebellar cognitive affective syndrome" (CCAS), which includes affective dysregulation, as well as executive, visuospatial, and linguistic impairments, and can accompany both acquired and developmental cerebellar disorders.
Single-photon emission computed tomography (SPECT) functional neuroimaging revealed significant perfusion anomalies in supratentorial regions that support skillful motor act execution (prefrontal lobe), behavioral and emotional control (prefrontal lobe), and visual information processing (occipital lobe). This finding confirms earlier neuroanatomical evidence that the cerebellum and the prefrontal, temporal, posterior parietal, and limbic cortices have strong neural connections. The cerebellum projects to the lateral prefrontal cortex (PFC) through the dentate nucleus and thalamus, whereas the PFC connects to the cerebellum via the pontine nuclei. As a result, anatomoclinical configurations in patients appear to reveal that, like CCAS, DCD may be caused by a disruption of the close functional interaction between the cerebellum and the supratentorial brain regions required for the implementation of organized motor function, affective regulation, and visuomotor processing. The distributed cerebrocerebellar network, which supports movement, cognition, and emotion, may be underdeveloped, which could explain the motor, cognitive, and emotional symptoms of DCD [23].

Features

Children with DCD struggle to tie their shoelaces, button their shirt buttons, open and close zippers, brush their teeth, and use dishes and utensils [11,24]. Since children with DCD frequently struggle with fine motor skills, especially handwriting, their schoolwork frequently does not reflect their actual abilities [25]. There is, however, also evidence of a more general academic impairment encompassing reading, working memory, and arithmetic abilities [26]. Although the disorder is initially diagnosed based on motor difficulties, it may progress to complicated psychosocial issues with challenges in peer relationships and social involvement [27], bullying [28], low self-esteem and sense of competence [29], and internalized psychological disorders, such as anxiety and low mood [27]. In addition to secondary psychosocial effects, individuals with DCD are more likely to exhibit other developmental characteristics such as hyperactivity, difficulty interacting with others, and specific learning problems, particularly dyslexia [30].

Incidence and prevalence

DCD influences the child's health and well-being, as well as participation in everyday life, with a subsequent impact on the family. It affects 1.8% to 4.8% of children, with a boy-to-girl ratio of 1.9, according to some studies, although others believe this is a conservative estimate [31]. DCD is one of the most prevalent disorders afflicting school-aged children, affecting 5-6% of this population [1,24,31,32].

Search Strategy

An open-date search method was used to search the literature. To find papers on DCD, we used the search phrases "developmental coordination disorder," "fine motor," and "hand movement." To explore papers on handwriting evaluation in DCD, we used the phrases "developmental coordination disorder," "handwriting," "tools," "scales," "evaluation," "parameters," and other "fine motor skills." This method was used on PubMed and Google Scholar. The search approach was revised as needed. Three reviewers examined the titles and abstracts acquired through the search techniques (P.K., M.I.Q., R.K.K.). The full-text review was likewise performed by the same reviewers. One reviewer checked the reference lists of the studies chosen for the review to find additional papers.
Inclusion/exclusion criteria for studies

Observational studies were among the requirements for eligibility. All English-language full-text studies on humans were included for school-aged participants aged 7 to 16. To be eligible, a study's intervention must have involved a scale or method for assessing handwriting in children with developmental coordination disorder. The investigations were published in peer-reviewed journals between 2009 and 2022. Letters to the editor, conference abstracts or inadequate data, animal research, and a lack of original data were all exclusion factors. There were no limitations on the setting. Studies conducted before 2008 were not included.

Study Characteristics

A database search resulted in a total of 45 abstracts to examine. Following title/abstract screening, 10 papers were chosen for full-text examination. Following the full-text screening, five observational studies were included in our evaluation (Figure 1). The DASH includes five tasks, the fifth being Free Writing for 10 minutes, as well as a nonlanguage activity entailing drawing crossing lines inside concentric rings. The DASH Free Writing assignment is completed in 10 minutes, with the main raw score being the mean number of words per minute determined throughout the whole 10-minute period. The DASH focuses on handwriting speed rather than particular qualities of handwriting quality or letter formation. The evaluation has one limitation: the five tasks included in the DASH were deemed appropriate for children aged 9 years and above. The cognitive demands of the DASH activities may be disproportionate for most children younger than this age, rendering the findings difficult to evaluate. All five DASH activities are appropriate for kids up to the age of 16 years and are sensitive to developmental changes across the age range [33]. A 2012 study designed a brief, effective handwriting screening instrument to meet this demand. The SOS (Systematische Opsporing van Schrijfmotorische problemen, or "Systematic Screening of Handwriting Difficulties") is patterned on the BHK, but it takes less time to complete. The SOS allows a quick check of a child's written text, with the full BHK graded afterward if more detailed information is needed to formulate an intervention plan. The six most discriminating factors explained 65% of the variation in pilot research (n = 128) and were therefore chosen from the 13 BHK criteria. To create the SOS test, they were rearranged and the scoring was streamlined. A total of 603 children aged 7 to 12 years, with an IQ of at least 70 but with developmental issues, were chosen from regular and special education schools. The writing speed of a child was measured by counting the written characters after he or she was asked to copy a paragraph for 5 minutes. Writing quality was assessed with seven items scored from 0 to 2, and a total raw score was derived by summing these seven scores. The components measuring letter height, regularity of letter height, and sentence alignment perpendicular to the axis were measured on a transparent sheet supplied with the handbook. The study found that inter-rater and intra-rater reliability were excellent, while test-retest reliability was low. This test can be used to detect handwriting issues early on.
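The raw-score conventions quoted above for the DASH Free Writing task and the SOS reduce to simple arithmetic. The Python sketch below is an illustrative reduction, not the official scoring software; in particular, reporting SOS speed per minute is an assumption, since the source only states that characters are counted over the 5-minute copying task:

```python
def dash_free_writing_speed(total_words: int, minutes: float = 10.0) -> float:
    """DASH Free Writing raw score: mean words per minute over the 10-minute task."""
    return total_words / minutes

def sos_quality_total(item_scores: list[int]) -> int:
    """SOS quality: seven items, each scored 0-2, summed to a raw total (0-14)."""
    assert len(item_scores) == 7 and all(0 <= s <= 2 for s in item_scores)
    return sum(item_scores)

def sos_speed_per_minute(characters_copied: int, minutes: float = 5.0) -> float:
    """SOS speed: characters copied during the 5-minute task, expressed per minute
    (per-minute normalization is an assumption, not stated in the source)."""
    return characters_copied / minutes

print(dash_free_writing_speed(142),               # 14.2 words/min
      sos_quality_total([1, 0, 2, 1, 1, 0, 2]),   # 7
      sos_speed_per_minute(230))                  # 46.0 characters/min
```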
The SOS may help accomplish the objective of prompt intervention for children, preventing secondary issues such as scholastic underachievement and low self-esteem that are frequently connected with handwriting difficulties. The child's capacity for sustained legible writing is not evaluated by the SOS. As a result, it is unknown whether certain children's legibility might diminish if they had to write for more than 5 minutes. The child's writing pace varies according to the context, the instructions, and whether he or she is copying, taking dictation, or free writing [34]. In 2015, researchers conducted a study to convert the previously recognized adults' Handwriting Proficiency Screening Questionnaire (HPSQ) into a children's self-report version (HPSQ-C) and assess its reliability and validity. A total of 230 Israeli youngsters aged 7 to 14 years from regular schools participated. The questionnaire's concurrent and construct validity, internal consistency, and content validity were all assessed. There are three domains and 10 items in this questionnaire: performance time (items 3, 4, and 9), physical and emotional well-being (items 1, 2, and 10), and legibility (items 5-8). Each item is graded from 0 (never) to 4 (always); greater scores indicate poorer performance. The HPSQ-C was determined to be adequate for assessing handwriting deficiency in school-aged children, as well as for a range of academic and therapeutic tasks. The HPSQ-C is most commonly used by occupational therapists [35]. In 2018, 452 healthy children aged 8 to 10 years took part in an observational study to assess several forms of validity and reliability of the Persian Handwriting Assessment Tool (PHAT). Students were chosen using a random cluster sampling method. The PHAT was created to evaluate handwriting legibility and speed. For the PHAT, the student copies, and writes from dictation, 12 words on a lined sheet of paper. Five legibility components taken into consideration by the PHAT - separation (the distance between words and letters), breadth (appropriateness of word size), placement (word angle on the line), skew (cumulative text angle on the line), and letter formation (correct ascending, descending, and rounding of letters) - are important factors in copying and dictation. This tool may be used independently and takes around 10 minutes to finish. Writing speed, expressed as letters written per minute, was computed from the number of letters and the writing time in seconds. Speed and orthographic mistakes were graded using a scale. The constituents of legibility (word structure, spacing, alignment, and text slant) were graded on a 5-point Likert scale (extremely poor to very good), with 5 being the highest score. Size was graded on a scale of 1 to 3, with 3 being the highest possible score. Finally, a participant's score in both the copy and dictation domains was calculated as the mean over the 12 words. The PHAT was found to be a viable and reliable way of evaluating handwriting in primary school-aged children; however, only students who are fluent in Persian can benefit from this instrument [36]. Fogel et al. explored handwriting legibility across a range of writing activities, as well as the characteristics that predict overall handwriting legibility. The participants' ages in the research ranged from 9 to 14 years. Texts from the DASH free-writing exercise were combined to create the Handwriting Legibility Scale (HLS).
Its goal was to evaluate five aspects of legibility: overall legibility (first-reading readability), total reading effort, page layout, letter formation, and writing modifications (attempts to rectify letters and words). Every component is assigned a score ranging from 1 (great performance) to 5, with the total legibility score ranging from 5 to 25. Higher ratings imply that the material is difficult to read. After testing handwriting legibility across multiple handwriting exercises, they concluded that the HLS is a useful tool for occupational therapists who work in schools [37].

Conclusions

In order to enable early intervention, which may prevent the secondary sequelae frequently associated with DCD, valid and reliable tools and scales for early identification of DCD are crucial. Researchers are arguing for an evidence-based, multi-professional, well-rounded assessment that includes motor assessments, parent/child questionnaires, and in-depth assessments of the impact on activities of daily living, and that positively contributes to family functioning and overall well-being. There appears to be a global interest in assessing younger populations. Numerous scales and instruments are available for evaluating handwriting in children with DCD. All of the aforementioned scales fall short when it comes to addressing the spatial and temporal aspects of handwriting. The existing scales are either limited in scope or technologically demanding, which, whether or not they are economically feasible, is impractical in the Indian setting. As a result, we believe that there is a considerable demand for a tool that analyses the spatial and temporal features of handwriting.

Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Hexvix blue light fluorescence cystoscopy - a promising approach in the diagnosis of superficial bladder tumors

INTRODUCTION Nowadays, Hexvix blue light cystoscopy (BLC) represents an increasingly acknowledged diagnostic method for patients with bladder cancer. The aim of our study was to establish the place of this procedure in the diagnosis of superficial bladder tumors and to compare it with standard white light cystoscopy (WLC). MATERIAL AND METHODS Between December 2007 and January 2008, WLC and BLC were performed in 20 cases. Transurethral bladder resection (TURB) was performed for all apparently detected lesions. The patients diagnosed with superficial bladder tumors were followed up after 3 months by WLC and BLC. The control group included the same number of consecutive patients with superficial bladder tumors, diagnosed only by WLC, who underwent the same treatment and follow-up protocol as the study group. RESULTS WLC identified 30 suspicious lesions (28 pathologically confirmed), while BLC identified 41 apparent tumors (39 pathologically confirmed). So, of the total number of 40 tumors with positive histology, WLC correctly diagnosed 70%, with a rate of 6.7% false-positive results, while BLC diagnosed 97.5%, with a 4.9% rate of false-positive results. Seventeen cases of the study group diagnosed with superficial bladder tumors were followed. The tumor recurrence rate after 3 months was 5.9% for the study group and 23.5% for the control group. CONCLUSIONS Hexvix fluorescence cystoscopy is a valuable diagnostic method, with considerably better results by comparison to WLC. The improved diagnosis may have a significant impact upon the recurrence rate.

Introduction

Bladder cancer represents a common malignancy, with a markedly high recurrence rate. Despite the fact that WLC is still regarded as the gold-standard diagnostic method for superficial bladder tumors [1], small papillary (Ta, T1) or flat (CIS) urothelial lesions can be easily overlooked, thus leading to a significant increase of the recurrence rate [2]. So, while searching for a more sensitive diagnostic tool, photodynamic diagnosis (PDD) emerged as a promising solution. Aminolevulinic acid (ALA) was among the first products used for PDD, but only after the introduction of its improved ester, hexyl aminolevulinate (HAL - Hexvix®), did the method acquire substantial acknowledgement in practical use. The EAU Guidelines state that fluorescence cystoscopy, performed in blue light and using a porphyrin-based photosensitizer, may help discover suspicious areas which can hardly be detected in white light [1]. Consequently, TURB under fluorescence guidance seems to reduce the risk of tumor recurrence [3]. In our country, Hexvix fluorescence cystoscopy was performed as a national premiere in the Department of Urology of "Saint John" Emergency Clinical Hospital in December 2007.

Materials and Methods

Between December 2007 and January 2008, 20 consecutive cases, 14 men and 6 women, with a mean age of 66 years (range 36 to 78), with hematuria and positive urinary cytology, were investigated in our Clinical Department. A standard investigative protocol was applied in all cases and included: general clinical examination, blood tests, urine culture, abdominal ultrasonography and IVP. No imaging evidence of upper urinary tract disease was found.
Hexvix bladder instillation (100 mg dissolved in 50 mL phosphate buffer solution, 8 mmol) was performed using a 14 Ch bladder catheter, after complete voiding, at least one hour prior to cystoscopy. The catheter was removed after instillation (except in 1 case with urinary incontinence, when it was simply clamped). Patients were instructed not to void and to repeatedly change position in order to ensure the contact of the entire bladder urothelium with hexaminolevulinate. The necessary equipment for BLC consisted of:
• a high-power endoscopic light source with an integrated excitation filter (wavelength 380-450 nm), which passes primarily blue light (D-light-C Storz system);
• a special light cord;
• a high-sensitivity version of the endoscopic camera, displaying a special "fluorescence mode";
• a foot pedal which allows convenient switching between white and blue light;
• a standard color monitor;
• a Storz rigid cystoscope with an integrated filter in the eyepiece.
All procedures were performed under spinal anesthesia. The first step of the endoscopic procedure was repeated washing of the bladder, aiming to evacuate the Hexvix solution and consequently avoid excessive fluorescence of the bladder content. Thus, the contrast required for the detection of small lesions was significantly improved. The second step included careful WLC, resulting in a bladder map of the suspicious lesions. Standard cystoscopy was followed by BLC, and the distinctively fluorescent lesions were also mapped (Fig. 1).

Figure 1: Hexvix-induced fluorescence of bladder urothelial tumors in blue light

A comparison of some uncertain lesions on the two bladder charts was obtained by repeatedly switching from white to blue light and back. TURB was performed for all the suspected lesions established by the two types of cystoscopy. The histological analysis provided a "pathological" bladder chart for each patient. A comparison between the three bladder maps was performed in order to establish the accuracy of each diagnostic method. The patients diagnosed with superficial bladder tumors were followed up after 3 months by WLC and BLC. The control group included the same number of consecutive patients with superficial bladder tumors, diagnosed only by WLC, who underwent the same treatment and follow-up protocol as the study group.

Results

The cystoscopy and pathological results described 4 different groups of patients. Group I included cases with identical bladder maps according to WLC and BLC, entirely confirmed by the pathological examination. This group included 10 cases (50%) in which 18 tumoral lesions were identified by both diagnostic tools (1 CIS, 9 pTa, 7 pT1 and 1 pT2). There were no false-positive lesions among these cases according to either method. Group II included the cases diagnosed with bladder cancer by WLC and in which BLC showed at least one supplementary bladder tumor. Group IV included one case (5%), in which WLC showed two apparently flat lesions, with no fluorescence in BLC and no pathological confirmation (Fig. 6). The sensitivity of WLC was 70%, with 28 of the actual 40 tumors being correctly diagnosed (1 CIS, 13 pTa, 12 pT1 and 2 pT2). The false-positive rate of this method was 6.7% (2 out of 30 resected suspicious lesions turned out to be benign).
Forty-one suspicious tumoral lesions were identified during BLC, of which 39 were confirmed by the pathology exam (7 CIS, 18 pTa, 12 pT1, 2 pT2). The false-positive results were represented by 2 CIS-suspicious lesions (4.9%). One pTa tumor described by WLC was not diagnosed by BLC. Therefore, this diagnostic method showed a sensitivity of 97.5% (39 out of the 40 pathologically confirmed tumors were also emphasized in blue light). There were no complications related to Hexvix-BLC. Regarding the follow-up, both patients with pT2 tumors (who underwent radical cystectomy) and the one with no pathologically confirmed malignancy were excluded from the study group. Among the remaining 17 patients, after re-TURB, 1 case of recurrence was identified, with a pathologically confirmed CIS lesion. In the control group, 4 cases of recurrence were diagnosed (5 tumors, of which 2 were pTa and 3 CIS). So, the tumor recurrence rate was 5.9% for the study group and 23.5% for the control group.

Commentaries

BLC is a diagnostic method which emerged from the constant need to improve the efficacy of standard WLC. ALA was the first topical agent used for PDD. Nowadays, it has been replaced by a more potent lipophilic derivative, HAL-Hexvix, an improved ester of aminolevulinic acid, which provides the benefits of increased selectivity, brighter fluorescence and shorter instillation time [5,6]. The basis of Hexvix cystoscopy is the increased preferential accumulation of the photoactive porphyrin in neoplastic tissue, resulting in red fluorescence-emitting tumors (including some of the previously undetected ones) [7]. The accuracy of BLC depends on a number of important practical aspects concerning the cystoscopy technique and the specific issues related to it. One of the most important goals is to improve the specificity of the method by reducing the number of unnecessary biopsies. There are some distinctive causes of false-positive fluorescence. One of them is the usually fluorescent appearance of the prostatic urethra, bladder neck and ureteral orifices (Fig. 7), unrelated to malignancy. Also, a tangential view in blue light may create a false fluorescence of the normal urothelium (Fig. 8). In order to avoid this problem, the bladder must be fully distended (thus eliminating the mucosal folds) and the lesions must be directly illuminated. The false fluorescence may be verified by pressing the concerned area with the resection loop: lesions in which fluorescence fades in this manner are usually benign, and therefore resection is unnecessary. Bladder inflammation may cause increased fluorescence of the mucosa and consequently false-positive results [8]. That is why BLC should not be performed any sooner than 3 months after BCG (Bacillus Calmette-Guerin) or chemotherapy intravesical instillations. The false-positive results are much related to the aggressiveness of certain urologists while performing bladder biopsies, due to their tendency to resect any remotely suspicious areas. A review of the literature emphasized a rate of unnecessary biopsies of 13 to 39% [5,8]. The margins of a fluorescent malignant lesion have to appear quite sharp and well separated from the surrounding regions (Fig. 9). Blue light is highly absorbed by blood and clots; therefore, resection-related bleeding may result in poor visualization and diagnostic accuracy. TURB should take place only after completing the 2 types of cystoscopy [8].
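The detection and recurrence figures reported in the Results reduce to simple proportions; the short Python sketch below reproduces them. Note that "false-positive rate" here follows the paper's usage, i.e., the share of resected suspicious lesions that proved benign:

```python
def sensitivity(true_positives: int, confirmed_tumors: int) -> float:
    """Fraction of pathologically confirmed tumors detected by the method."""
    return true_positives / confirmed_tumors

def false_positive_rate(benign_resections: int, resected_lesions: int) -> float:
    """Fraction of resected suspicious lesions that proved benign (paper's definition)."""
    return benign_resections / resected_lesions

print(f"WLC: sensitivity {sensitivity(28, 40):.1%}, FP rate {false_positive_rate(2, 30):.1%}")
print(f"BLC: sensitivity {sensitivity(39, 40):.1%}, FP rate {false_positive_rate(2, 41):.1%}")
print(f"Recurrence at 3 months: study {1/17:.1%} vs control {4/17:.1%}")
# WLC: 70.0% / 6.7%; BLC: 97.5% / 4.9%; recurrence 5.9% vs 23.5%
```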
The exclusion criteria for patients considered for Hexvix BLC are allergy to HAL or related compounds, pregnancy, lactation, and recent intravesical instillations.

In a multicentric study which included 298 patients, Fradet and Grossman compared Hexvix-BLC and WLC regarding the detection of CIS. In 58 cases with 113 CIS lesions, BLC detected 104 (92%), while WLC only established the presence of 77 (68%). Therefore, the authors concluded that Hexvix-BLC is able to diagnose CIS lesions that may be missed by WLC [7].

Frampton and Plosker mention two European, multicentric, phase III trials, which analyzed HAL cystoscopy as an adjunct to standard cystoscopy in patients with known or suspected bladder cancer. According to one trial, HAL cystoscopy detected 96% of the patients with CIS (28% more patients with CIS than standard cystoscopy). In the second trial, 17% of patients were selected to receive a more complete treatment, due to the improved tumor detection rate following HAL cystoscopy [9].

Grossman evaluated 311 patients in another multicenter study, in which HAL fluorescence cystoscopy was compared with WLC concerning the detection of Ta and T1 papillary lesions. In 29% of the cases, Hexvix cystoscopy detected at least one more Ta or T1 papillary tumor than WLC. Detection rates for HAL and standard cystoscopy were 95% vs. 83% for Ta tumors, and 95% vs. 86% for T1 tumors, respectively [8].

Loidl and Schmidbauer performed a prospective controlled study providing a within-patient comparison of flexible HAL cystoscopy, standard flexible cystoscopy, rigid HAL cystoscopy and standard white-light rigid cystoscopy. HAL fluorescence flexible cystoscopy and HAL rigid cystoscopy showed almost equivalent results in detecting papillary and flat lesions in bladder cancer patients, both procedures being superior to standard white-light flexible cystoscopy [2].

Most of the BLC side effects are related to the endoscopic procedure rather than to the HAL instillation, with no significant differences from WLC (dysuria, hematuria, bladder spasm, and bladder pain) [4].

Because 5-ALA and HAL do not penetrate much deeper than 1 mm, fluorescence cannot indicate the invasion depth. Therefore, a second TURB should be performed in T1 tumors, in order to rule out muscle invasion [4].

It has been shown that recurrence at any site in the bladder at the first follow-up cystoscopy after TUR is one of the most important prognostic factors for time to progression. Therefore, PDD might be most useful for patients with primary tumors [4].

It is quite obvious that tumors diagnosed shortly after TURB are mostly lesions overlooked during the primary procedure rather than newly developed malignancies. This is the rationale for trying to improve the short-term recurrence rate and the prognosis through superior diagnostic accuracy.

Denziger and Burger performed a comparative analysis of 191 cases with superficial bladder tumors diagnosed either by WLC or BLC. The recurrence rates among these patients were 44% in the WLC arm versus 16% in the BLC arm. Moreover, the residual tumor rates at 6 weeks after the primary TURB were 25.2% and 4.5%, respectively [10].
Similar results were reported by Daniltcenko and Riedl, who described recurrence rates of 16% in cases investigated by BLC and 41% in patients diagnosed by the standard protocol. The differences between the recurrence rates tend to decrease over time: at 2, 12, 36 and 60 months from the initial TURB, the rates were 41%, 61%, 73% and 75% in the WLC arm versus 16%, 43%, 59% and 59% in the BLC arm [11].

In the literature, results concerning secondarily diagnosed lesions are rather contradictory. Filbek and Pichlmeir discovered no heterotopic residual tumors during second-look TURB. On the other hand, Riedl and Daniltcenko emphasized an almost similar proportion of orthotopic and heterotopic residual malignancies [12,13].

Conclusions

Hexaminolevulinate (HAL-Hexvix) fluorescence cystoscopy proves to be a powerful diagnostic method in superficial bladder cancer, with more effective imaging, higher detection rates and improved sensitivity in comparison to WLC.

Patients with Ta, T1 and especially CIS tumors are the main beneficiaries of this technique, as it provides them with better chances for a complete TURB.

The improved accuracy of BLC leads to a significantly reduced recurrence rate, especially during the short- and medium-term follow-up.

The impact of Hexvix-BLC on the prognosis and survival rates of patients with superficial bladder tumors should represent the main objective of future large, multicenter studies.

Figure 2: Comparative aspect in WLC (left) and BLC (right) of two pTa urothelial bladder tumors
Figure 4: CIS not visible in white light but with specific fluorescence in blue light
Figure 7: Normal fluorescence of the prostatic urethra (left) and bladder neck (right)
Figure 8: False-positive fluorescence of the normal urothelium in tangential view
Figure 9: Sharp margins of a malignant lesion (left) by comparison to the diffuse aspect of a false-positive fluorescence secondary to inflammation (right)
2018-04-03T03:14:33.912Z
2008-07-01T00:00:00.000
{ "year": 2008, "sha1": "28499c0a974eb0702fbd76b40cdcb5a0c8d26fca", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "647b9e7f27f694025f3678a7d854151c38d0e5c7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55416448
pes2o/s2orc
v3-fos-license
Influence of Immersion Lithography on Wafer Edge Defectivity

In semiconductor manufacturing, the control of defects at the edge of the wafer is a key factor in keeping the number of yielding die per wafer as high as possible. Using dry lithography, this control is typically done by an edge bead removal (EBR) process, which is well understood. The recent introduction of immersion lithography into production, however, changes this situation significantly. During immersion exposure, the wafer edge is locally in contact with water from the immersion hood, and particles can then be transported back and forth between the wafer edge area and the scanner wafer stage. Material in the EBR region can also potentially be damaged by the dynamic force of the immersion hood movement.

In this chapter, we have investigated the impact of immersion lithography on wafer edge defectivity. In the past, such work has been limited to the inspection of the flat top part of the wafer edge, due to the inspection challenges at the curved wafer edge and the lack of a comprehensive defect inspection solution. The development of edge inspection and metrology tools now allows us to probe this area of the wafer. This study used KLA-Tencor's VisEdge CV300-R, an automated edge inspection system that provides full wafer edge inspection (top, side, and bottom) using laser illumination and multi-sensor detection, and in which defects can be classified with Automated Defect Classification (ADC) software. In addition to defectivity capture, the tool performs simultaneous, multi-layer film edge and EBR metrology, indicating the distance from the wafer edge or wafer top depending on the location of the film/EBR edge. It can also review defects using an integrated high-resolution microscope.

The work revealed several key factors related to wafer edge defectivity control: choice of resist, optimization of EBR recipes, scanner and immersion-fluid contamination, wafer handling, and device processing procedures. Understanding the mechanisms of wafer edge related immersion defects is believed to be critical to the successful integration of the immersion process into semiconductor manufacturing.
Process control at the wafer edge

In semiconductor manufacturing, control of the process at the wafer edge is a key factor in determining the total number of yielding die on a wafer. The removal of photoresist from the wafer backside and edges is especially important to avoid contact between the resist and the scanner stage or wafer handling hardware. Typically, a solvent EBR rinse is the last step in the coating recipe: the combination of a solvent stream from a static nozzle toward the wafer back side and a dynamic nozzle toward the wafer front side dissolves the resist up to a few millimeters from the wafer's outer edge.

The desired position of the EBR material edge at the top side (the so-called EBR width) can depend on the coated material (e.g., antireflective topcoat vs. photoresist material) and/or on the layer within the device: for example, a contact hole lithography process might use a slightly different EBR width than the gate process. To increase die yield, it is desirable to have EBR widths that are as small as possible.

Immersion lithography [1-4] has significantly changed the way we view defectivity issues at the wafer edge. During the immersion exposure sequence, the wafer edge is in contact with the water from the immersion hood (IH), introducing additional concerns beyond direct contact of resist with the scanner. First, when the IH is scanning in the EBR region, its movement can damage material edges (Fig. 1a). IMEC's program on immersion lithography found, for example, that photoresist material can partially peel off during the IH pass (Fig. 1b).

A second concern involves the cleanliness of the wafer edge outside the EBR edges. The IH pass wets not only the near-edge top surface, but also the curved wafer edge and even part of the bottom surface. Defects can be released from this area and re-deposited either on the wafer or on the wafer stage. In the first case, there will be a direct impact on the wafer defectivity. In the second case, defects present on the wafer stage can be transported onto wafers in subsequent processing. IMEC's program found that resist residues left on the curved wafer part by an incomplete EBR step can be damaged by the IH pass, releasing fragments into the system (Fig. 1c). Traditional defect inspection techniques have serious limitations when monitoring these new issues.
Conventional dark-field or bright-field inspection tools cannot access the wafer edge, since these systems typically have an edge exclusion of ~3 mm. While microscopy tools can inspect the edge area, they can only give qualitative information with limited sampling.

Technology for wafer edge defect inspection

The new measurement system for wafer edge defect inspection (Fig. 2) is based on a laser source directed at the wafer edge surface. Three detectors simultaneously collect the scattered light, the specular (reflected) light, and the phase difference between different polarization states. As the laser scans the wafer edge surface, each signal can be converted into an image. Each type of defect produces a specific combination of signals from the three detection channels, allowing automated defect classification (a toy illustration of this idea is sketched at the end of this section).

Imaging of the wafer edge

Imaging covers the entire edge region, including the following areas: ~5 mm bottom near-edge, bottom bevel, apex, top bevel, and ~5 mm top near-edge. Scanning generates a continuous high-resolution image of the entire wafer edge, which can be represented as a Mercator projection: an unfolding of the wafer edge surface into a flat plane. Excursions in eccentricity and/or in EBR width, which might result in a layer's edge ending on the wrong underlying substrate, can easily be monitored using this kind of inspection. For wafer edge cleanliness, a high-resolution view of the wafer edge is typically more useful. Here, the images cover only a few millimeters of the edge. Figure 3 uses this representation to show resist flakes observed along the apex-bevel regions in the specular channel.

Immersion defect process characterization and optimization

As indicated in the introduction, immersion-related defects at the wafer edge can be due to edge damage to the coated material in the EBR area, from the IH passing over this region. On the other hand, defects can be caused by transport of particles present on the bevel. These might be released by forces of the immersion hood, transported by the water in the hood, and re-deposited on the wafer and/or stage. This work focuses on the latter, and in particular on the flake defects observed in past work [5].

Edge region flake defects

Flake defects are related to material residues that are present on the wafer edge after coating. Typically, these residues are only present on the apex of the bevel, and are therefore difficult to detect by conventional top-down inspection methods. The residues result from a non-optimized EBR process: since the coated material on the wafer edge can be significantly thicker than on the flat top region, an insufficient solvent supply can leave edge residues even while the top surface is clean. This phenomenon is more commonly observed with photoresist materials than with BARC and topcoat materials. The morphology of edge residues can depend on the resist: for some resists, the residue can be quite uniform along the apex, while for others, large areas of thick residue are combined with areas of thin residue.

Once detected, the problem can be solved by adjusting the EBR recipe. Because making the EBR recipe longer limits the throughput of the immersion cluster, however, fabs try to avoid this adjustment if possible; as a result, there is a clear risk of processing wafers with resist residues on the immersion scanner.
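Before turning to what happens when such residue-laden wafers reach the scanner, the multi-channel classification idea described above can be made concrete. The following Python sketch is purely illustrative: the channel thresholds, feature names, and defect classes are invented assumptions, not the tool's actual ADC rules, which are not given in this chapter.

```python
from dataclasses import dataclass

@dataclass
class EdgeSignal:
    """Per-blob response of the three detection channels (illustrative)."""
    scatter: float   # scattered-light intensity, normalized 0..1
    specular: float  # reflected-light intensity, normalized 0..1
    phase: float     # polarization phase difference, normalized 0..1

def classify_defect(sig: EdgeSignal, zone: str) -> str:
    """Toy rule-based classifier: each defect type is assumed to map to a
    characteristic combination of channel responses (thresholds invented)."""
    if zone == "apex" and sig.specular < 0.3:
        # dark patch in the specular image: candidate re-deposited flake
        return "edge flake (apex)"
    if zone == "top" and sig.scatter > 0.7 and sig.specular < 0.5:
        # strong scatterer in the top near-edge region
        return "particle / flake (top near-edge)"
    if sig.phase > 0.6:
        return "film-edge / thickness transition"
    return "no defect"

# Illustrative use on two hypothetical signals
print(classify_defect(EdgeSignal(scatter=0.9, specular=0.2, phase=0.1), "top"))
print(classify_defect(EdgeSignal(scatter=0.1, specular=0.8, phase=0.05), "apex"))
```

The rules loosely mirror the observations reported later in this chapter (apex flakes best seen in the specular SideScan channel, top near-edge defects in a combination of specular and scatter channels); a production ADC recipe would of course be trained and validated on real channel images.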
When wafers with resist residues are exposed on an immersion scanner, it is difficult to predict whether the IH pass over the edge of the wafer will damage the resist residue, and whether (part of) the residues will redeposit on the wafer top side or on the scanner wafer stage. Tilted SEM review suggested qualitatively that such damage can happen with certain resists.

Experimental conditions of edge flake characterization

We experimented with three resists with different chemistries (Fig. 4); sensitivity to edge damage was expected to vary across the three resist types. A dedicated exposure job spatially separated the areas where flakes were expected from those where they were not. One section, consisting of two rows of 11 fields, was exposed close to the wafer edge on the side opposite the notch. A similar area of two rows of 11 fields was exposed in the region of the notch. During the exposure of these 2 × 11 sections at both locations near the wafer edge (Region II), the IH makes continuous up- and down-scans over the wafer edge area, increasing the probability of defect generation. The exposure job was also designed so that on another part of the wafer (Region I, on the right-hand side), the immersion hood did not pass over the wafer edge. In Region I, no flake-like defects were expected.

Qualification

The specular images of regions with resist residues clearly showed differences in reflected intensity: dark areas in the resist residues correspond to thick layers, while light areas indicate much thinner layers. The results obtained from Resist Type A are detailed below. We compared the SideScan images of areas where the IH did and did not pass. Figure 5a is a typical SideScan specular image for Region I (where the IH did not pass): differences in thick and thin resist residues are visible, but no fragments of the resist residues are evident. In contrast, Fig. 5b, taken from Region II, indicates that parts of the thick residue at the bottom of the apex were released. The close-up in the image shows that some of these edge flakes were re-deposited on the apex, closer to the top.

To determine whether any of these edge flakes end up in the top region (where edge die can be damaged), we analyzed the TopScan images of the areas corresponding to Figs. 5a and 5b using the scatter channel, as shown in Figs. 5c and 5d. In Region II, many particles were detected, while in Region I no particles were observed. This observation was encouraging for further ADC work.

Classification of the edge region flakes was found to vary with defect location. Re-deposited edge flakes on the apex side were best classified using their signal in the specular channel of the SideScan image. For re-deposited defects in the top near-edge region, a combination of signals in the specular and scatter channels gave more accurate classification. Once all the measurement parameters for both areas are fixed, they can be combined into a single measurement sequence that provides defect classification and mapping for all the wafer edge areas of interest (Fig. 6).

Immersion process characterization and optimization

Having qualified the inspection to classify and map edge flake defects, we used our results in a design of experiments to improve our understanding of this kind of defect source and its key impact parameters. As indicated above, Resist A tends to generate flakes when the IH passes over its edge.
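The Region I / Region II layout lends itself to a simple per-resist comparison: Region I gives a background count (no IH pass over the edge), and a clear excess in Region II points to IH-induced flaking. Below is a hedged sketch of such a tally; the threshold and the counts in the usage example are placeholders for illustration, not the study's measurement data.

```python
def ih_flaking_suspected(region1_count: int, region2_count: int,
                         ratio_threshold: float = 3.0) -> bool:
    """Flag a resist as IH-flaking when defects in the exposed-edge zone
    (Region II) clearly exceed the no-IH-pass background (Region I).
    The ratio threshold is an arbitrary illustrative choice."""
    background = max(region1_count, 1)  # guard against division by zero
    return region2_count / background >= ratio_threshold

# Placeholder counts for three resists (illustrative only):
counts = {"Resist A": (3, 250), "Resist B": (5, 7), "Resist C": (4, 9)}
for resist, (r1, r2) in counts.items():
    verdict = ("IH-induced flaking suspected"
               if ih_flaking_suspected(r1, r2) else "background level")
    print(f"{resist}: Region I={r1}, Region II={r2} -> {verdict}")
```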
The non-optimized coating process also left residues for the two other resists, Resists B and C; however, the residue morphology was different. When the same immersion exposure was used, significantly fewer edge flakes were detected in the near-top region for Resists B and C than for Resist A (Fig. 7). Moreover, the residual defects were less confined to the exposure zone, so some of these defects might be caused by coating and wafer handling. In the TopScan images, no clear sign of damage was seen. Clearly, the choice of resist chemistry can be important in preventing these kinds of defects.

As indicated earlier, resist residues can be reduced by changing the EBR recipe on the coat track. Resist A showed several hundred flake defects with the regular (short) EBR sequence. After optimization, this resist achieved defect values similar to the background values obtained with the non-flaking Resists B and C.

Further wafer edge challenges

More kinds of defects besides the edge region flakes can be important in immersion lithography. This section discusses other possible defect sources.

Wafer handling marks and resist rework process

A variety of artifacts were seen even on fresh Si wafers, primarily in the bevel and apex region. These wafers had undergone very limited processing and handling, but damage was visible in the form of particles in the apex/bevel region. This introduces an additional concern with transport-related artifacts, and illustrates the need for an assessment of wafer edge quality and handling before introduction to the immersion process.

Resist rework processes

At IMEC, resist rework is typically done by a combination of a dry ashing step followed by a wet clean. In some cases, rework may be indicated to address an out-of-spec condition. Wafers used for monitoring of focus/dose/CD or overlay processes may be reworked daily. Limited rework typically results in an increased presence of scratches (typically at the lower bottom bevel) and an overall increase in reflectivity variation, indicating degraded surface quality. When wafers are reworked ~10 times or more, the bevel/apex area is much more affected. These defects could pose a risk when the immersion hood passes over the wafer.

Conclusion

In this paper, we investigated the impact of immersion lithography on wafer edge defectivity. In the past, such work has been limited to inspection of the flat top part of the wafer edge, due to the inspection challenges at the curved wafer edge and the lack of a comprehensive defect inspection solution. Our study used a new automated edge inspection system that provides full wafer edge imaging and automatic defect classification. The work revealed several key challenges to controlling wafer edge-related defectivity, including choice of resist, optimization of EBR recipes, and wafer handling.
2018-12-05T19:35:59.378Z
2010-02-01T00:00:00.000
{ "year": 2010, "sha1": "7d6b2e79176d3b62b8be055d34a75504567e4e54", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.5772/8169", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "c5e28964b4d0d57b054f73271f30299d9ca5a0c9", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }